00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 263 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.121 The recommended git tool is: git 00:00:00.121 using credential 00000000-0000-0000-0000-000000000002 00:00:00.126 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.148 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.175 Using shallow fetch with depth 1 00:00:00.175 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.175 > git --version # timeout=10 00:00:00.190 > git --version # 'git version 2.39.2' 00:00:00.190 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.252 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.263 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.273 Checking out Revision 9b8cb13ca58b20128762541e7d6e360f21b83f5a (FETCH_HEAD) 00:00:05.273 > git config core.sparsecheckout # timeout=10 00:00:05.286 > git read-tree -mu HEAD # timeout=10 00:00:05.301 > git checkout -f 9b8cb13ca58b20128762541e7d6e360f21b83f5a # timeout=5 00:00:05.320 Commit message: "inventory: repurpose WFP74 and WFP75 to dev systems" 00:00:05.320 > git rev-list --no-walk 9b8cb13ca58b20128762541e7d6e360f21b83f5a # timeout=10 00:00:05.402 [Pipeline] Start of Pipeline 00:00:05.418 [Pipeline] library 00:00:05.420 Loading library shm_lib@master 00:00:05.421 Library shm_lib@master is cached. Copying from home. 00:00:05.441 [Pipeline] node 00:00:05.451 Running on WFP5 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.453 [Pipeline] { 00:00:05.470 [Pipeline] catchError 00:00:05.471 [Pipeline] { 00:00:05.487 [Pipeline] wrap 00:00:05.495 [Pipeline] { 00:00:05.500 [Pipeline] stage 00:00:05.501 [Pipeline] { (Prologue) 00:00:05.775 [Pipeline] sh 00:00:06.053 + logger -p user.info -t JENKINS-CI 00:00:06.071 [Pipeline] echo 00:00:06.073 Node: WFP5 00:00:06.079 [Pipeline] sh 00:00:06.371 [Pipeline] setCustomBuildProperty 00:00:06.379 [Pipeline] echo 00:00:06.380 Cleanup processes 00:00:06.384 [Pipeline] sh 00:00:06.659 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.659 2760688 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.672 [Pipeline] sh 00:00:06.950 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.950 ++ grep -v 'sudo pgrep' 00:00:06.950 ++ awk '{print $1}' 00:00:06.950 + sudo kill -9 00:00:06.950 + true 00:00:06.962 [Pipeline] cleanWs 00:00:06.968 [WS-CLEANUP] Deleting project workspace... 00:00:06.968 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.972 [WS-CLEANUP] done 00:00:06.976 [Pipeline] setCustomBuildProperty 00:00:06.987 [Pipeline] sh 00:00:07.263 + sudo git config --global --replace-all safe.directory '*' 00:00:07.346 [Pipeline] nodesByLabel 00:00:07.348 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.360 [Pipeline] httpRequest 00:00:07.365 HttpMethod: GET 00:00:07.366 URL: http://10.211.164.101/packages/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:07.368 Sending request to url: http://10.211.164.101/packages/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:07.376 Response Code: HTTP/1.1 200 OK 00:00:07.376 Success: Status code 200 is in the accepted range: 200,404 00:00:07.377 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:10.120 [Pipeline] sh 00:00:10.395 + tar --no-same-owner -xf jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:10.415 [Pipeline] httpRequest 00:00:10.419 HttpMethod: GET 00:00:10.420 URL: http://10.211.164.101/packages/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:10.420 Sending request to url: http://10.211.164.101/packages/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:10.436 Response Code: HTTP/1.1 200 OK 00:00:10.437 Success: Status code 200 is in the accepted range: 200,404 00:00:10.437 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:53.704 [Pipeline] sh 00:00:53.983 + tar --no-same-owner -xf spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:56.525 [Pipeline] sh 00:00:56.807 + git -C spdk log --oneline -n5 00:00:56.807 cf8ec7cfe version: 24.09-pre 00:00:56.807 2d6134546 lib/ftl: Handle trim requests without VSS 00:00:56.807 106ad3793 lib/ftl: Rename unmap to trim 00:00:56.807 5555d51c8 lib/ftl: Add means to create new layout regions 00:00:56.807 5d89ebb72 lib/ftl: Add deinit handler to FTL mngt 00:00:56.819 [Pipeline] sh 00:00:57.096 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/89/22689/9 00:00:57.663 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:57.663 * branch refs/changes/89/22689/9 -> FETCH_HEAD 00:00:57.674 [Pipeline] sh 00:00:57.952 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:58.210 Previous HEAD position was 08f3a46de7 pmdinfogen: avoid empty string in ELFSymbol() 00:00:58.210 HEAD is now at 34c818cd83 isal: compile compress_isal PMD without system-wide libisal 00:00:58.219 [Pipeline] } 00:00:58.235 [Pipeline] // stage 00:00:58.243 [Pipeline] stage 00:00:58.245 [Pipeline] { (Prepare) 00:00:58.264 [Pipeline] writeFile 00:00:58.283 [Pipeline] sh 00:00:58.563 + logger -p user.info -t JENKINS-CI 00:00:58.576 [Pipeline] sh 00:00:58.857 + logger -p user.info -t JENKINS-CI 00:00:58.868 [Pipeline] sh 00:00:59.147 + cat autorun-spdk.conf 00:00:59.148 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.148 SPDK_TEST_NVMF=1 00:00:59.148 SPDK_TEST_NVME_CLI=1 00:00:59.148 SPDK_TEST_NVMF_NICS=mlx5 00:00:59.148 SPDK_RUN_UBSAN=1 00:00:59.148 NET_TYPE=phy 00:00:59.154 RUN_NIGHTLY= 00:00:59.158 [Pipeline] readFile 00:00:59.180 [Pipeline] withEnv 00:00:59.182 [Pipeline] { 00:00:59.195 [Pipeline] sh 00:00:59.476 + set -ex 00:00:59.476 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:59.476 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:59.476 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.476 ++ SPDK_TEST_NVMF=1 00:00:59.476 ++ SPDK_TEST_NVME_CLI=1 00:00:59.476 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:59.476 ++ 
SPDK_RUN_UBSAN=1 00:00:59.476 ++ NET_TYPE=phy 00:00:59.476 ++ RUN_NIGHTLY= 00:00:59.476 + case $SPDK_TEST_NVMF_NICS in 00:00:59.476 + DRIVERS=mlx5_ib 00:00:59.476 + [[ -n mlx5_ib ]] 00:00:59.476 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:59.476 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:06.045 rmmod: ERROR: Module irdma is not currently loaded 00:01:06.045 rmmod: ERROR: Module i40iw is not currently loaded 00:01:06.045 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:06.045 + true 00:01:06.045 + for D in $DRIVERS 00:01:06.045 + sudo modprobe mlx5_ib 00:01:06.045 + exit 0 00:01:06.053 [Pipeline] } 00:01:06.073 [Pipeline] // withEnv 00:01:06.078 [Pipeline] } 00:01:06.098 [Pipeline] // stage 00:01:06.112 [Pipeline] catchError 00:01:06.114 [Pipeline] { 00:01:06.126 [Pipeline] timeout 00:01:06.126 Timeout set to expire in 40 min 00:01:06.127 [Pipeline] { 00:01:06.139 [Pipeline] stage 00:01:06.141 [Pipeline] { (Tests) 00:01:06.155 [Pipeline] sh 00:01:06.434 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:06.434 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:06.434 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:06.434 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:06.434 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:06.434 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:06.434 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:06.434 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:06.434 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:06.434 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:06.434 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:06.434 + source /etc/os-release 00:01:06.434 ++ NAME='Fedora Linux' 00:01:06.434 ++ VERSION='38 (Cloud Edition)' 00:01:06.434 ++ ID=fedora 00:01:06.434 ++ VERSION_ID=38 00:01:06.434 ++ VERSION_CODENAME= 00:01:06.434 ++ PLATFORM_ID=platform:f38 00:01:06.434 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:06.434 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:06.434 ++ LOGO=fedora-logo-icon 00:01:06.434 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:06.434 ++ HOME_URL=https://fedoraproject.org/ 00:01:06.434 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:06.434 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:06.434 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:06.434 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:06.434 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:06.434 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:06.434 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:06.435 ++ SUPPORT_END=2024-05-14 00:01:06.435 ++ VARIANT='Cloud Edition' 00:01:06.435 ++ VARIANT_ID=cloud 00:01:06.435 + uname -a 00:01:06.435 Linux spdk-wfp-05 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:06.435 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:08.966 Hugepages 00:01:08.966 node hugesize free / total 00:01:08.966 node0 1048576kB 0 / 0 00:01:08.966 node0 2048kB 0 / 0 00:01:08.966 node1 1048576kB 0 / 0 00:01:08.966 node1 2048kB 0 / 0 00:01:08.966 00:01:08.966 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:08.966 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:08.966 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:08.966 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:08.966 I/OAT 0000:00:04.3 8086 2021 0 
ioatdma - - 00:01:09.225 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:09.225 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:09.225 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:09.225 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:09.225 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:09.225 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:09.225 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:09.225 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:09.225 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:09.225 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:09.225 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:09.225 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:09.225 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:09.225 + rm -f /tmp/spdk-ld-path 00:01:09.225 + source autorun-spdk.conf 00:01:09.225 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.225 ++ SPDK_TEST_NVMF=1 00:01:09.225 ++ SPDK_TEST_NVME_CLI=1 00:01:09.225 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:09.225 ++ SPDK_RUN_UBSAN=1 00:01:09.225 ++ NET_TYPE=phy 00:01:09.225 ++ RUN_NIGHTLY= 00:01:09.225 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:09.225 + [[ -n '' ]] 00:01:09.225 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:09.225 + for M in /var/spdk/build-*-manifest.txt 00:01:09.225 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:09.225 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:09.225 + for M in /var/spdk/build-*-manifest.txt 00:01:09.225 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:09.225 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:09.225 ++ uname 00:01:09.225 + [[ Linux == \L\i\n\u\x ]] 00:01:09.225 + sudo dmesg -T 00:01:09.225 + sudo dmesg --clear 00:01:09.225 + dmesg_pid=2761760 00:01:09.225 + [[ Fedora Linux == FreeBSD ]] 00:01:09.225 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:09.225 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:09.225 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:09.225 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:09.225 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:09.225 + [[ -x /usr/src/fio-static/fio ]] 00:01:09.225 + export FIO_BIN=/usr/src/fio-static/fio 00:01:09.225 + FIO_BIN=/usr/src/fio-static/fio 00:01:09.225 + sudo dmesg -Tw 00:01:09.225 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:09.225 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:09.225 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:09.225 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:09.225 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:09.225 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:09.225 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:09.225 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:09.225 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:09.483 Test configuration: 00:01:09.483 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.483 SPDK_TEST_NVMF=1 00:01:09.483 SPDK_TEST_NVME_CLI=1 00:01:09.483 SPDK_TEST_NVMF_NICS=mlx5 00:01:09.483 SPDK_RUN_UBSAN=1 00:01:09.483 NET_TYPE=phy 00:01:09.483 RUN_NIGHTLY= 20:10:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:09.483 20:10:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:09.483 20:10:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:09.483 20:10:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:09.483 20:10:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.483 20:10:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.483 20:10:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.483 20:10:22 -- paths/export.sh@5 -- $ export PATH 00:01:09.483 20:10:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.483 20:10:22 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:09.483 20:10:22 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:09.483 20:10:22 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715883022.XXXXXX 00:01:09.483 20:10:22 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715883022.23t1Ii 00:01:09.483 20:10:22 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:09.483 20:10:22 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:09.483 20:10:22 -- common/autobuild_common.sh@446 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:09.483 20:10:22 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:09.483 20:10:22 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:09.483 20:10:22 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:09.483 20:10:22 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:09.483 20:10:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.483 20:10:22 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:09.483 20:10:22 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:09.483 20:10:22 -- pm/common@17 -- $ local monitor 00:01:09.483 20:10:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.483 20:10:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.483 20:10:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.483 20:10:22 -- pm/common@21 -- $ date +%s 00:01:09.483 20:10:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.483 20:10:22 -- pm/common@21 -- $ date +%s 00:01:09.483 20:10:22 -- pm/common@25 -- $ sleep 1 00:01:09.483 20:10:22 -- pm/common@21 -- $ date +%s 00:01:09.483 20:10:22 -- pm/common@21 -- $ date +%s 00:01:09.483 20:10:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715883022 00:01:09.483 20:10:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715883022 00:01:09.483 20:10:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715883022 00:01:09.483 20:10:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715883022 00:01:09.483 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715883022_collect-vmstat.pm.log 00:01:09.483 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715883022_collect-cpu-temp.pm.log 00:01:09.483 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715883022_collect-cpu-load.pm.log 00:01:09.483 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715883022_collect-bmc-pm.bmc.pm.log 00:01:10.416 20:10:23 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:10.416 20:10:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:10.416 20:10:23 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:10.416 20:10:23 -- spdk/autobuild.sh@13 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:10.416 20:10:23 -- spdk/autobuild.sh@16 -- $ date -u 00:01:10.416 Thu May 16 06:10:23 PM UTC 2024 00:01:10.416 20:10:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:10.416 v24.09-pre 00:01:10.416 20:10:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:10.416 20:10:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:10.416 20:10:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:10.416 20:10:23 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:10.416 20:10:23 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:10.416 20:10:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.416 ************************************ 00:01:10.416 START TEST ubsan 00:01:10.416 ************************************ 00:01:10.416 20:10:23 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:10.416 using ubsan 00:01:10.416 00:01:10.416 real 0m0.000s 00:01:10.416 user 0m0.000s 00:01:10.416 sys 0m0.000s 00:01:10.416 20:10:23 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:10.416 20:10:23 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:10.416 ************************************ 00:01:10.416 END TEST ubsan 00:01:10.416 ************************************ 00:01:10.675 20:10:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:10.675 20:10:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:10.675 20:10:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:10.675 20:10:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:10.675 20:10:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:10.675 20:10:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:10.675 20:10:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:10.675 20:10:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:10.675 20:10:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:10.675 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:10.675 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:10.932 Using 'verbs' RDMA provider 00:01:24.123 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:34.096 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:34.354 Creating mk/config.mk...done. 00:01:34.355 Creating mk/cc.flags.mk...done. 00:01:34.355 Type 'make' to build. 00:01:34.355 20:10:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:34.355 20:10:47 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:34.355 20:10:47 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:34.355 20:10:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.612 ************************************ 00:01:34.612 START TEST make 00:01:34.612 ************************************ 00:01:34.612 20:10:47 make -- common/autotest_common.sh@1121 -- $ make -j96 00:01:34.870 make[1]: Nothing to be done for 'all'. 
00:01:43.013 The Meson build system 00:01:43.013 Version: 1.3.1 00:01:43.013 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:43.013 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:43.013 Build type: native build 00:01:43.013 Program cat found: YES (/usr/bin/cat) 00:01:43.013 Project name: DPDK 00:01:43.013 Project version: 24.03.0 00:01:43.013 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:43.013 C linker for the host machine: cc ld.bfd 2.39-16 00:01:43.013 Host machine cpu family: x86_64 00:01:43.013 Host machine cpu: x86_64 00:01:43.013 Message: ## Building in Developer Mode ## 00:01:43.013 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:43.013 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:43.013 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:43.013 Program python3 found: YES (/usr/bin/python3) 00:01:43.013 Program cat found: YES (/usr/bin/cat) 00:01:43.013 Compiler for C supports arguments -march=native: YES 00:01:43.013 Checking for size of "void *" : 8 00:01:43.013 Checking for size of "void *" : 8 (cached) 00:01:43.013 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:43.013 Library m found: YES 00:01:43.013 Library numa found: YES 00:01:43.013 Has header "numaif.h" : YES 00:01:43.013 Library fdt found: NO 00:01:43.013 Library execinfo found: NO 00:01:43.013 Has header "execinfo.h" : YES 00:01:43.013 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:43.013 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:43.013 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:43.013 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:43.013 Run-time dependency openssl found: YES 3.0.9 00:01:43.013 Run-time dependency libpcap found: YES 1.10.4 00:01:43.013 Has header "pcap.h" with dependency libpcap: YES 00:01:43.013 Compiler for C supports arguments -Wcast-qual: YES 00:01:43.013 Compiler for C supports arguments -Wdeprecated: YES 00:01:43.013 Compiler for C supports arguments -Wformat: YES 00:01:43.013 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:43.013 Compiler for C supports arguments -Wformat-security: NO 00:01:43.013 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.013 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:43.013 Compiler for C supports arguments -Wnested-externs: YES 00:01:43.013 Compiler for C supports arguments -Wold-style-definition: YES 00:01:43.013 Compiler for C supports arguments -Wpointer-arith: YES 00:01:43.013 Compiler for C supports arguments -Wsign-compare: YES 00:01:43.013 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:43.013 Compiler for C supports arguments -Wundef: YES 00:01:43.013 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.013 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:43.013 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:43.013 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.013 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:43.013 Program objdump found: YES (/usr/bin/objdump) 00:01:43.013 Compiler for C supports arguments -mavx512f: YES 00:01:43.013 Checking if "AVX512 checking" compiles: YES 00:01:43.013 Fetching 
value of define "__SSE4_2__" : 1 00:01:43.013 Fetching value of define "__AES__" : 1 00:01:43.013 Fetching value of define "__AVX__" : 1 00:01:43.013 Fetching value of define "__AVX2__" : 1 00:01:43.013 Fetching value of define "__AVX512BW__" : 1 00:01:43.013 Fetching value of define "__AVX512CD__" : 1 00:01:43.013 Fetching value of define "__AVX512DQ__" : 1 00:01:43.013 Fetching value of define "__AVX512F__" : 1 00:01:43.013 Fetching value of define "__AVX512VL__" : 1 00:01:43.013 Fetching value of define "__PCLMUL__" : 1 00:01:43.013 Fetching value of define "__RDRND__" : 1 00:01:43.013 Fetching value of define "__RDSEED__" : 1 00:01:43.013 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:43.013 Fetching value of define "__znver1__" : (undefined) 00:01:43.013 Fetching value of define "__znver2__" : (undefined) 00:01:43.013 Fetching value of define "__znver3__" : (undefined) 00:01:43.013 Fetching value of define "__znver4__" : (undefined) 00:01:43.013 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:43.013 Message: lib/log: Defining dependency "log" 00:01:43.013 Message: lib/kvargs: Defining dependency "kvargs" 00:01:43.013 Message: lib/telemetry: Defining dependency "telemetry" 00:01:43.013 Checking for function "getentropy" : NO 00:01:43.013 Message: lib/eal: Defining dependency "eal" 00:01:43.013 Message: lib/ring: Defining dependency "ring" 00:01:43.013 Message: lib/rcu: Defining dependency "rcu" 00:01:43.013 Message: lib/mempool: Defining dependency "mempool" 00:01:43.013 Message: lib/mbuf: Defining dependency "mbuf" 00:01:43.013 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:43.013 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:43.013 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:43.013 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:43.013 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:43.013 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:43.013 Compiler for C supports arguments -mpclmul: YES 00:01:43.013 Compiler for C supports arguments -maes: YES 00:01:43.013 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:43.013 Compiler for C supports arguments -mavx512bw: YES 00:01:43.013 Compiler for C supports arguments -mavx512dq: YES 00:01:43.013 Compiler for C supports arguments -mavx512vl: YES 00:01:43.013 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:43.013 Compiler for C supports arguments -mavx2: YES 00:01:43.013 Compiler for C supports arguments -mavx: YES 00:01:43.013 Message: lib/net: Defining dependency "net" 00:01:43.013 Message: lib/meter: Defining dependency "meter" 00:01:43.013 Message: lib/ethdev: Defining dependency "ethdev" 00:01:43.013 Message: lib/pci: Defining dependency "pci" 00:01:43.013 Message: lib/cmdline: Defining dependency "cmdline" 00:01:43.013 Message: lib/hash: Defining dependency "hash" 00:01:43.013 Message: lib/timer: Defining dependency "timer" 00:01:43.013 Message: lib/compressdev: Defining dependency "compressdev" 00:01:43.013 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:43.013 Message: lib/dmadev: Defining dependency "dmadev" 00:01:43.013 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:43.013 Message: lib/power: Defining dependency "power" 00:01:43.013 Message: lib/reorder: Defining dependency "reorder" 00:01:43.013 Message: lib/security: Defining dependency "security" 00:01:43.013 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:01:43.013 Message: lib/stack: 
Defining dependency "stack" 00:01:43.013 Has header "linux/userfaultfd.h" : YES 00:01:43.013 Has header "linux/vduse.h" : YES 00:01:43.013 Message: lib/vhost: Defining dependency "vhost" 00:01:43.013 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:43.013 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:43.013 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:43.013 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:43.013 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:43.013 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:43.013 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:43.013 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:43.013 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:43.013 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:43.013 Program doxygen found: YES (/usr/bin/doxygen) 00:01:43.013 Configuring doxy-api-html.conf using configuration 00:01:43.013 Configuring doxy-api-man.conf using configuration 00:01:43.013 Program mandb found: YES (/usr/bin/mandb) 00:01:43.013 Program sphinx-build found: NO 00:01:43.013 Configuring rte_build_config.h using configuration 00:01:43.013 Message: 00:01:43.013 ================= 00:01:43.014 Applications Enabled 00:01:43.014 ================= 00:01:43.014 00:01:43.014 apps: 00:01:43.014 00:01:43.014 00:01:43.014 Message: 00:01:43.014 ================= 00:01:43.014 Libraries Enabled 00:01:43.014 ================= 00:01:43.014 00:01:43.014 libs: 00:01:43.014 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:43.014 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:43.014 cryptodev, dmadev, power, reorder, security, stack, vhost, 00:01:43.014 00:01:43.014 Message: 00:01:43.014 =============== 00:01:43.014 Drivers Enabled 00:01:43.014 =============== 00:01:43.014 00:01:43.014 common: 00:01:43.014 00:01:43.014 bus: 00:01:43.014 pci, vdev, 00:01:43.014 mempool: 00:01:43.014 ring, 00:01:43.014 dma: 00:01:43.014 00:01:43.014 net: 00:01:43.014 00:01:43.014 crypto: 00:01:43.014 00:01:43.014 compress: 00:01:43.014 00:01:43.014 vdpa: 00:01:43.014 00:01:43.014 00:01:43.014 Message: 00:01:43.014 ================= 00:01:43.014 Content Skipped 00:01:43.014 ================= 00:01:43.014 00:01:43.014 apps: 00:01:43.014 dumpcap: explicitly disabled via build config 00:01:43.014 graph: explicitly disabled via build config 00:01:43.014 pdump: explicitly disabled via build config 00:01:43.014 proc-info: explicitly disabled via build config 00:01:43.014 test-acl: explicitly disabled via build config 00:01:43.014 test-bbdev: explicitly disabled via build config 00:01:43.014 test-cmdline: explicitly disabled via build config 00:01:43.014 test-compress-perf: explicitly disabled via build config 00:01:43.014 test-crypto-perf: explicitly disabled via build config 00:01:43.014 test-dma-perf: explicitly disabled via build config 00:01:43.014 test-eventdev: explicitly disabled via build config 00:01:43.014 test-fib: explicitly disabled via build config 00:01:43.014 test-flow-perf: explicitly disabled via build config 00:01:43.014 test-gpudev: explicitly disabled via build config 00:01:43.014 test-mldev: explicitly disabled via build config 00:01:43.014 test-pipeline: explicitly disabled via build config 00:01:43.014 test-pmd: explicitly disabled via build config 00:01:43.014 test-regex: 
explicitly disabled via build config 00:01:43.014 test-sad: explicitly disabled via build config 00:01:43.014 test-security-perf: explicitly disabled via build config 00:01:43.014 00:01:43.014 libs: 00:01:43.014 argparse: explicitly disabled via build config 00:01:43.014 metrics: explicitly disabled via build config 00:01:43.014 acl: explicitly disabled via build config 00:01:43.014 bbdev: explicitly disabled via build config 00:01:43.014 bitratestats: explicitly disabled via build config 00:01:43.014 bpf: explicitly disabled via build config 00:01:43.014 cfgfile: explicitly disabled via build config 00:01:43.014 distributor: explicitly disabled via build config 00:01:43.014 efd: explicitly disabled via build config 00:01:43.014 eventdev: explicitly disabled via build config 00:01:43.014 dispatcher: explicitly disabled via build config 00:01:43.014 gpudev: explicitly disabled via build config 00:01:43.014 gro: explicitly disabled via build config 00:01:43.014 gso: explicitly disabled via build config 00:01:43.014 ip_frag: explicitly disabled via build config 00:01:43.014 jobstats: explicitly disabled via build config 00:01:43.014 latencystats: explicitly disabled via build config 00:01:43.014 lpm: explicitly disabled via build config 00:01:43.014 member: explicitly disabled via build config 00:01:43.014 pcapng: explicitly disabled via build config 00:01:43.014 rawdev: explicitly disabled via build config 00:01:43.014 regexdev: explicitly disabled via build config 00:01:43.014 mldev: explicitly disabled via build config 00:01:43.014 rib: explicitly disabled via build config 00:01:43.014 sched: explicitly disabled via build config 00:01:43.014 ipsec: explicitly disabled via build config 00:01:43.014 pdcp: explicitly disabled via build config 00:01:43.014 fib: explicitly disabled via build config 00:01:43.014 port: explicitly disabled via build config 00:01:43.014 pdump: explicitly disabled via build config 00:01:43.014 table: explicitly disabled via build config 00:01:43.014 pipeline: explicitly disabled via build config 00:01:43.014 graph: explicitly disabled via build config 00:01:43.014 node: explicitly disabled via build config 00:01:43.014 00:01:43.014 drivers: 00:01:43.014 common/cpt: not in enabled drivers build config 00:01:43.014 common/dpaax: not in enabled drivers build config 00:01:43.014 common/iavf: not in enabled drivers build config 00:01:43.014 common/idpf: not in enabled drivers build config 00:01:43.014 common/ionic: not in enabled drivers build config 00:01:43.014 common/mvep: not in enabled drivers build config 00:01:43.014 common/octeontx: not in enabled drivers build config 00:01:43.014 bus/auxiliary: not in enabled drivers build config 00:01:43.014 bus/cdx: not in enabled drivers build config 00:01:43.014 bus/dpaa: not in enabled drivers build config 00:01:43.014 bus/fslmc: not in enabled drivers build config 00:01:43.014 bus/ifpga: not in enabled drivers build config 00:01:43.014 bus/platform: not in enabled drivers build config 00:01:43.014 bus/uacce: not in enabled drivers build config 00:01:43.014 bus/vmbus: not in enabled drivers build config 00:01:43.014 common/cnxk: not in enabled drivers build config 00:01:43.014 common/mlx5: not in enabled drivers build config 00:01:43.014 common/nfp: not in enabled drivers build config 00:01:43.014 common/nitrox: not in enabled drivers build config 00:01:43.014 common/qat: not in enabled drivers build config 00:01:43.014 common/sfc_efx: not in enabled drivers build config 00:01:43.014 mempool/bucket: not in enabled drivers 
build config 00:01:43.014 mempool/cnxk: not in enabled drivers build config 00:01:43.014 mempool/dpaa: not in enabled drivers build config 00:01:43.014 mempool/dpaa2: not in enabled drivers build config 00:01:43.014 mempool/octeontx: not in enabled drivers build config 00:01:43.014 mempool/stack: not in enabled drivers build config 00:01:43.014 dma/cnxk: not in enabled drivers build config 00:01:43.014 dma/dpaa: not in enabled drivers build config 00:01:43.014 dma/dpaa2: not in enabled drivers build config 00:01:43.014 dma/hisilicon: not in enabled drivers build config 00:01:43.014 dma/idxd: not in enabled drivers build config 00:01:43.014 dma/ioat: not in enabled drivers build config 00:01:43.014 dma/skeleton: not in enabled drivers build config 00:01:43.014 net/af_packet: not in enabled drivers build config 00:01:43.014 net/af_xdp: not in enabled drivers build config 00:01:43.014 net/ark: not in enabled drivers build config 00:01:43.014 net/atlantic: not in enabled drivers build config 00:01:43.014 net/avp: not in enabled drivers build config 00:01:43.014 net/axgbe: not in enabled drivers build config 00:01:43.014 net/bnx2x: not in enabled drivers build config 00:01:43.014 net/bnxt: not in enabled drivers build config 00:01:43.014 net/bonding: not in enabled drivers build config 00:01:43.014 net/cnxk: not in enabled drivers build config 00:01:43.014 net/cpfl: not in enabled drivers build config 00:01:43.014 net/cxgbe: not in enabled drivers build config 00:01:43.014 net/dpaa: not in enabled drivers build config 00:01:43.014 net/dpaa2: not in enabled drivers build config 00:01:43.014 net/e1000: not in enabled drivers build config 00:01:43.014 net/ena: not in enabled drivers build config 00:01:43.014 net/enetc: not in enabled drivers build config 00:01:43.014 net/enetfec: not in enabled drivers build config 00:01:43.014 net/enic: not in enabled drivers build config 00:01:43.014 net/failsafe: not in enabled drivers build config 00:01:43.014 net/fm10k: not in enabled drivers build config 00:01:43.014 net/gve: not in enabled drivers build config 00:01:43.014 net/hinic: not in enabled drivers build config 00:01:43.014 net/hns3: not in enabled drivers build config 00:01:43.014 net/i40e: not in enabled drivers build config 00:01:43.014 net/iavf: not in enabled drivers build config 00:01:43.014 net/ice: not in enabled drivers build config 00:01:43.014 net/idpf: not in enabled drivers build config 00:01:43.014 net/igc: not in enabled drivers build config 00:01:43.014 net/ionic: not in enabled drivers build config 00:01:43.014 net/ipn3ke: not in enabled drivers build config 00:01:43.014 net/ixgbe: not in enabled drivers build config 00:01:43.014 net/mana: not in enabled drivers build config 00:01:43.014 net/memif: not in enabled drivers build config 00:01:43.014 net/mlx4: not in enabled drivers build config 00:01:43.014 net/mlx5: not in enabled drivers build config 00:01:43.014 net/mvneta: not in enabled drivers build config 00:01:43.014 net/mvpp2: not in enabled drivers build config 00:01:43.014 net/netvsc: not in enabled drivers build config 00:01:43.014 net/nfb: not in enabled drivers build config 00:01:43.014 net/nfp: not in enabled drivers build config 00:01:43.014 net/ngbe: not in enabled drivers build config 00:01:43.014 net/null: not in enabled drivers build config 00:01:43.014 net/octeontx: not in enabled drivers build config 00:01:43.014 net/octeon_ep: not in enabled drivers build config 00:01:43.014 net/pcap: not in enabled drivers build config 00:01:43.014 net/pfe: not in enabled 
drivers build config 00:01:43.014 net/qede: not in enabled drivers build config 00:01:43.014 net/ring: not in enabled drivers build config 00:01:43.014 net/sfc: not in enabled drivers build config 00:01:43.014 net/softnic: not in enabled drivers build config 00:01:43.014 net/tap: not in enabled drivers build config 00:01:43.014 net/thunderx: not in enabled drivers build config 00:01:43.014 net/txgbe: not in enabled drivers build config 00:01:43.014 net/vdev_netvsc: not in enabled drivers build config 00:01:43.014 net/vhost: not in enabled drivers build config 00:01:43.014 net/virtio: not in enabled drivers build config 00:01:43.014 net/vmxnet3: not in enabled drivers build config 00:01:43.014 raw/*: missing internal dependency, "rawdev" 00:01:43.014 crypto/armv8: not in enabled drivers build config 00:01:43.014 crypto/bcmfs: not in enabled drivers build config 00:01:43.014 crypto/caam_jr: not in enabled drivers build config 00:01:43.014 crypto/ccp: not in enabled drivers build config 00:01:43.014 crypto/cnxk: not in enabled drivers build config 00:01:43.014 crypto/dpaa_sec: not in enabled drivers build config 00:01:43.014 crypto/dpaa2_sec: not in enabled drivers build config 00:01:43.014 crypto/ipsec_mb: not in enabled drivers build config 00:01:43.014 crypto/mlx5: not in enabled drivers build config 00:01:43.014 crypto/mvsam: not in enabled drivers build config 00:01:43.014 crypto/nitrox: not in enabled drivers build config 00:01:43.014 crypto/null: not in enabled drivers build config 00:01:43.014 crypto/octeontx: not in enabled drivers build config 00:01:43.014 crypto/openssl: not in enabled drivers build config 00:01:43.015 crypto/scheduler: not in enabled drivers build config 00:01:43.015 crypto/uadk: not in enabled drivers build config 00:01:43.015 crypto/virtio: not in enabled drivers build config 00:01:43.015 compress/isal: not in enabled drivers build config 00:01:43.015 compress/mlx5: not in enabled drivers build config 00:01:43.015 compress/nitrox: not in enabled drivers build config 00:01:43.015 compress/octeontx: not in enabled drivers build config 00:01:43.015 compress/zlib: not in enabled drivers build config 00:01:43.015 regex/*: missing internal dependency, "regexdev" 00:01:43.015 ml/*: missing internal dependency, "mldev" 00:01:43.015 vdpa/ifc: not in enabled drivers build config 00:01:43.015 vdpa/mlx5: not in enabled drivers build config 00:01:43.015 vdpa/nfp: not in enabled drivers build config 00:01:43.015 vdpa/sfc: not in enabled drivers build config 00:01:43.015 event/*: missing internal dependency, "eventdev" 00:01:43.015 baseband/*: missing internal dependency, "bbdev" 00:01:43.015 gpu/*: missing internal dependency, "gpudev" 00:01:43.015 00:01:43.015 00:01:43.015 Build targets in project: 88 00:01:43.015 00:01:43.015 DPDK 24.03.0 00:01:43.015 00:01:43.015 User defined options 00:01:43.015 buildtype : debug 00:01:43.015 default_library : shared 00:01:43.015 libdir : lib 00:01:43.015 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:43.015 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:43.015 c_link_args : 00:01:43.015 cpu_instruction_set: native 00:01:43.015 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:43.015 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:43.015 enable_docs : false 00:01:43.015 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:43.015 enable_kmods : false 00:01:43.015 tests : false 00:01:43.015 00:01:43.015 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.015 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:43.015 [1/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:43.277 [2/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:43.277 [3/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:43.277 [4/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:43.277 [5/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:43.277 [6/274] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:43.277 [7/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:43.277 [8/274] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:43.277 [9/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:43.277 [10/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:43.277 [11/274] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:43.277 [12/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:43.277 [13/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:43.277 [14/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:43.277 [15/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:43.277 [16/274] Linking static target lib/librte_kvargs.a 00:01:43.277 [17/274] Linking static target lib/librte_log.a 00:01:43.277 [18/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:43.277 [19/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:43.536 [20/274] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:43.536 [21/274] Linking static target lib/librte_pci.a 00:01:43.536 [22/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:43.536 [23/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:43.536 [24/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:43.536 [25/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:43.536 [26/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:43.536 [27/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:43.537 [28/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:43.537 [29/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:43.796 [30/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:43.796 [31/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:43.796 [32/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:43.796 [33/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:43.796 [34/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:43.796 
[35/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:43.796 [36/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:43.796 [37/274] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:43.796 [38/274] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:43.796 [39/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:43.796 [40/274] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:43.796 [41/274] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:43.796 [42/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:43.796 [43/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:43.796 [44/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:43.796 [45/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:43.796 [46/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:43.796 [47/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:43.796 [48/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:43.796 [49/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:43.796 [50/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:43.796 [51/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:43.796 [52/274] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:43.796 [53/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:43.796 [54/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:43.796 [55/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:43.796 [56/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:43.796 [57/274] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:43.796 [58/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:43.796 [59/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:43.796 [60/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:43.796 [61/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:43.796 [62/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:43.796 [63/274] Linking static target lib/librte_meter.a 00:01:43.796 [64/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:43.796 [65/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:43.796 [66/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:43.796 [67/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:43.796 [68/274] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:43.796 [69/274] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:43.796 [70/274] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.796 [71/274] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:43.796 [72/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:43.796 [73/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:43.796 [74/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:43.796 [75/274] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:43.796 [76/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:43.796 [77/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:43.796 [78/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:43.796 [79/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:43.796 [80/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:43.796 [81/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:43.796 [82/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:43.796 [83/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:43.796 [84/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:43.796 [85/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:43.796 [86/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:43.796 [87/274] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:43.796 [88/274] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:43.796 [89/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:43.796 [90/274] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.796 [91/274] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:43.796 [92/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:43.796 [93/274] Linking static target lib/librte_ring.a 00:01:43.796 [94/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:43.796 [95/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:43.796 [96/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:43.796 [97/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:43.796 [98/274] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:43.796 [99/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:43.796 [100/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:43.796 [101/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:43.796 [102/274] Linking static target lib/librte_telemetry.a 00:01:43.796 [103/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:43.796 [104/274] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:43.796 [105/274] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:43.796 [106/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:43.796 [107/274] Linking static target lib/librte_rcu.a 00:01:43.796 [108/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:43.796 [109/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:43.796 [110/274] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:43.796 [111/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:43.796 [112/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:43.796 [113/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:43.796 [114/274] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:43.796 [115/274] Linking static target lib/librte_mempool.a 00:01:44.054 [116/274] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:44.054 [117/274] Linking static target lib/librte_net.a 00:01:44.054 [118/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:44.054 [119/274] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:44.054 [120/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.054 [121/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:44.054 [122/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.054 [123/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:44.054 [124/274] Linking static target lib/librte_eal.a 00:01:44.054 [125/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:44.054 [126/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:44.054 [127/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:44.054 [128/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:44.054 [129/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:44.054 [130/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:44.054 [131/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:44.054 [132/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:44.054 [133/274] Linking static target lib/librte_cmdline.a 00:01:44.054 [134/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:44.054 [135/274] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.054 [136/274] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.054 [137/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:44.054 [138/274] Linking static target lib/librte_mbuf.a 00:01:44.054 [139/274] Linking target lib/librte_log.so.24.1 00:01:44.054 [140/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:44.054 [141/274] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:44.054 [142/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:44.054 [143/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:44.054 [144/274] Linking static target lib/librte_timer.a 00:01:44.054 [145/274] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.054 [146/274] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:44.054 [147/274] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.054 [148/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:44.054 [149/274] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.312 [150/274] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.312 [151/274] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.312 [152/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:44.312 [153/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:44.312 [154/274] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:44.312 [155/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:44.312 [156/274] Compiling C 
object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.312 [157/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.312 [158/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:44.312 [159/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.312 [160/274] Linking target lib/librte_kvargs.so.24.1 00:01:44.312 [161/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:44.312 [162/274] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.312 [163/274] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:44.312 [164/274] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:44.312 [165/274] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.312 [166/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:44.312 [167/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:44.312 [168/274] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.312 [169/274] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.312 [170/274] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:44.312 [171/274] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:44.312 [172/274] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:44.312 [173/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.312 [174/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:44.312 [175/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:44.312 [176/274] Linking static target lib/librte_security.a 00:01:44.312 [177/274] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.312 [178/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:44.312 [179/274] Linking static target lib/librte_stack.a 00:01:44.312 [180/274] Linking static target lib/librte_power.a 00:01:44.312 [181/274] Linking static target lib/librte_dmadev.a 00:01:44.313 [182/274] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:44.313 [183/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:44.313 [184/274] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:44.313 [185/274] Linking target lib/librte_telemetry.so.24.1 00:01:44.313 [186/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:44.313 [187/274] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.313 [188/274] Linking static target lib/librte_compressdev.a 00:01:44.313 [189/274] Linking static target lib/librte_reorder.a 00:01:44.313 [190/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.313 [191/274] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:44.313 [192/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.313 [193/274] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.313 [194/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.313 [195/274] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:44.571 [196/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.571 [197/274] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.571 [198/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:44.571 [199/274] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:44.571 [200/274] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:44.571 [201/274] Linking static target lib/librte_hash.a 00:01:44.571 [202/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:44.571 [203/274] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.571 [204/274] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:44.571 [205/274] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.571 [206/274] Linking static target drivers/librte_bus_vdev.a 00:01:44.571 [207/274] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:44.571 [208/274] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.571 [209/274] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.571 [210/274] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.571 [211/274] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.571 [212/274] Linking static target drivers/librte_mempool_ring.a 00:01:44.571 [213/274] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:44.571 [214/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:44.571 [215/274] Linking static target lib/librte_cryptodev.a 00:01:44.571 [216/274] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.571 [217/274] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.571 [218/274] Linking static target drivers/librte_bus_pci.a 00:01:44.571 [219/274] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.829 [220/274] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.829 [221/274] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.829 [222/274] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.829 [223/274] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.087 [224/274] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.087 [225/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:45.087 [226/274] Linking static target lib/librte_ethdev.a 00:01:45.087 [227/274] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.087 [228/274] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.087 [229/274] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.345 [230/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:45.345 [231/274] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.345 [232/274] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.279 [233/274] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:46.279 [234/274] Linking static target lib/librte_vhost.a 00:01:46.537 [235/274] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.909 [236/274] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.172 [237/274] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.738 [238/274] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.996 [239/274] Linking target lib/librte_eal.so.24.1 00:01:53.996 [240/274] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:53.996 [241/274] Linking target lib/librte_ring.so.24.1 00:01:53.996 [242/274] Linking target lib/librte_pci.so.24.1 00:01:53.996 [243/274] Linking target drivers/librte_bus_vdev.so.24.1 00:01:53.996 [244/274] Linking target lib/librte_meter.so.24.1 00:01:53.996 [245/274] Linking target lib/librte_dmadev.so.24.1 00:01:53.996 [246/274] Linking target lib/librte_timer.so.24.1 00:01:53.996 [247/274] Linking target lib/librte_stack.so.24.1 00:01:54.253 [248/274] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:54.253 [249/274] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:54.253 [250/274] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:54.253 [251/274] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:54.253 [252/274] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:54.253 [253/274] Linking target lib/librte_rcu.so.24.1 00:01:54.253 [254/274] Linking target lib/librte_mempool.so.24.1 00:01:54.253 [255/274] Linking target drivers/librte_bus_pci.so.24.1 00:01:54.253 [256/274] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:54.253 [257/274] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:54.511 [258/274] Linking target lib/librte_mbuf.so.24.1 00:01:54.511 [259/274] Linking target drivers/librte_mempool_ring.so.24.1 00:01:54.511 [260/274] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:54.511 [261/274] Linking target lib/librte_compressdev.so.24.1 00:01:54.511 [262/274] Linking target lib/librte_reorder.so.24.1 00:01:54.511 [263/274] Linking target lib/librte_net.so.24.1 00:01:54.511 [264/274] Linking target lib/librte_cryptodev.so.24.1 00:01:54.770 [265/274] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:54.770 [266/274] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:54.770 [267/274] Linking target lib/librte_hash.so.24.1 00:01:54.770 [268/274] Linking target lib/librte_cmdline.so.24.1 00:01:54.770 [269/274] Linking target lib/librte_security.so.24.1 00:01:54.770 [270/274] Linking target lib/librte_ethdev.so.24.1 00:01:54.770 [271/274] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:54.770 [272/274] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:55.028 [273/274] Linking target lib/librte_power.so.24.1 00:01:55.028 [274/274] Linking target lib/librte_vhost.so.24.1 00:01:55.028 INFO: autodetecting backend as ninja 00:01:55.028 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 
00:01:55.959 CC lib/log/log.o 00:01:55.959 CC lib/ut_mock/mock.o 00:01:55.959 CC lib/log/log_flags.o 00:01:55.959 CC lib/log/log_deprecated.o 00:01:55.959 CC lib/ut/ut.o 00:01:55.959 LIB libspdk_log.a 00:01:55.959 LIB libspdk_ut_mock.a 00:01:55.959 LIB libspdk_ut.a 00:01:55.959 SO libspdk_log.so.7.0 00:01:56.216 SO libspdk_ut.so.2.0 00:01:56.216 SO libspdk_ut_mock.so.6.0 00:01:56.216 SYMLINK libspdk_log.so 00:01:56.216 SYMLINK libspdk_ut.so 00:01:56.217 SYMLINK libspdk_ut_mock.so 00:01:56.475 CC lib/dma/dma.o 00:01:56.475 CC lib/util/base64.o 00:01:56.475 CC lib/ioat/ioat.o 00:01:56.475 CC lib/util/bit_array.o 00:01:56.475 CC lib/util/crc32.o 00:01:56.475 CC lib/util/cpuset.o 00:01:56.475 CC lib/util/crc32c.o 00:01:56.475 CC lib/util/crc16.o 00:01:56.475 CXX lib/trace_parser/trace.o 00:01:56.475 CC lib/util/crc32_ieee.o 00:01:56.475 CC lib/util/crc64.o 00:01:56.475 CC lib/util/file.o 00:01:56.475 CC lib/util/dif.o 00:01:56.475 CC lib/util/fd.o 00:01:56.475 CC lib/util/hexlify.o 00:01:56.475 CC lib/util/iov.o 00:01:56.475 CC lib/util/strerror_tls.o 00:01:56.475 CC lib/util/math.o 00:01:56.475 CC lib/util/pipe.o 00:01:56.475 CC lib/util/string.o 00:01:56.475 CC lib/util/uuid.o 00:01:56.475 CC lib/util/fd_group.o 00:01:56.475 CC lib/util/xor.o 00:01:56.475 CC lib/util/zipf.o 00:01:56.475 CC lib/vfio_user/host/vfio_user_pci.o 00:01:56.475 CC lib/vfio_user/host/vfio_user.o 00:01:56.734 LIB libspdk_dma.a 00:01:56.734 SO libspdk_dma.so.4.0 00:01:56.734 LIB libspdk_ioat.a 00:01:56.734 SYMLINK libspdk_dma.so 00:01:56.734 SO libspdk_ioat.so.7.0 00:01:56.734 LIB libspdk_vfio_user.a 00:01:56.734 SYMLINK libspdk_ioat.so 00:01:56.734 SO libspdk_vfio_user.so.5.0 00:01:56.734 SYMLINK libspdk_vfio_user.so 00:01:56.993 LIB libspdk_util.a 00:01:56.993 SO libspdk_util.so.9.0 00:01:56.993 SYMLINK libspdk_util.so 00:01:56.993 LIB libspdk_trace_parser.a 00:01:57.252 SO libspdk_trace_parser.so.5.0 00:01:57.252 SYMLINK libspdk_trace_parser.so 00:01:57.252 CC lib/vmd/vmd.o 00:01:57.252 CC lib/vmd/led.o 00:01:57.252 CC lib/conf/conf.o 00:01:57.252 CC lib/json/json_util.o 00:01:57.252 CC lib/json/json_parse.o 00:01:57.252 CC lib/json/json_write.o 00:01:57.252 CC lib/env_dpdk/env.o 00:01:57.252 CC lib/env_dpdk/memory.o 00:01:57.252 CC lib/env_dpdk/pci.o 00:01:57.252 CC lib/env_dpdk/init.o 00:01:57.252 CC lib/env_dpdk/threads.o 00:01:57.252 CC lib/env_dpdk/pci_ioat.o 00:01:57.252 CC lib/env_dpdk/pci_virtio.o 00:01:57.252 CC lib/env_dpdk/pci_vmd.o 00:01:57.252 CC lib/rdma/common.o 00:01:57.252 CC lib/rdma/rdma_verbs.o 00:01:57.252 CC lib/env_dpdk/pci_idxd.o 00:01:57.252 CC lib/env_dpdk/pci_event.o 00:01:57.252 CC lib/env_dpdk/sigbus_handler.o 00:01:57.252 CC lib/env_dpdk/pci_dpdk.o 00:01:57.252 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:57.252 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:57.252 CC lib/idxd/idxd.o 00:01:57.252 CC lib/idxd/idxd_user.o 00:01:57.509 LIB libspdk_conf.a 00:01:57.509 LIB libspdk_rdma.a 00:01:57.509 SO libspdk_conf.so.6.0 00:01:57.509 LIB libspdk_json.a 00:01:57.509 SO libspdk_rdma.so.6.0 00:01:57.768 SO libspdk_json.so.6.0 00:01:57.769 SYMLINK libspdk_conf.so 00:01:57.769 SYMLINK libspdk_rdma.so 00:01:57.769 SYMLINK libspdk_json.so 00:01:57.769 LIB libspdk_idxd.a 00:01:57.769 LIB libspdk_vmd.a 00:01:57.769 SO libspdk_idxd.so.12.0 00:01:57.769 SO libspdk_vmd.so.6.0 00:01:57.769 SYMLINK libspdk_idxd.so 00:01:58.027 SYMLINK libspdk_vmd.so 00:01:58.027 CC lib/jsonrpc/jsonrpc_server.o 00:01:58.027 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:58.027 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:58.027 CC 
lib/jsonrpc/jsonrpc_client.o 00:01:58.284 LIB libspdk_jsonrpc.a 00:01:58.284 SO libspdk_jsonrpc.so.6.0 00:01:58.284 SYMLINK libspdk_jsonrpc.so 00:01:58.284 LIB libspdk_env_dpdk.a 00:01:58.284 SO libspdk_env_dpdk.so.14.0 00:01:58.543 SYMLINK libspdk_env_dpdk.so 00:01:58.543 CC lib/rpc/rpc.o 00:01:58.802 LIB libspdk_rpc.a 00:01:58.802 SO libspdk_rpc.so.6.0 00:01:58.802 SYMLINK libspdk_rpc.so 00:01:59.061 CC lib/trace/trace_flags.o 00:01:59.061 CC lib/trace/trace.o 00:01:59.061 CC lib/trace/trace_rpc.o 00:01:59.061 CC lib/notify/notify.o 00:01:59.061 CC lib/notify/notify_rpc.o 00:01:59.061 CC lib/keyring/keyring.o 00:01:59.061 CC lib/keyring/keyring_rpc.o 00:01:59.320 LIB libspdk_notify.a 00:01:59.320 SO libspdk_notify.so.6.0 00:01:59.320 LIB libspdk_trace.a 00:01:59.320 LIB libspdk_keyring.a 00:01:59.320 SYMLINK libspdk_notify.so 00:01:59.320 SO libspdk_trace.so.10.0 00:01:59.320 SO libspdk_keyring.so.1.0 00:01:59.578 SYMLINK libspdk_keyring.so 00:01:59.578 SYMLINK libspdk_trace.so 00:01:59.837 CC lib/sock/sock.o 00:01:59.837 CC lib/sock/sock_rpc.o 00:01:59.837 CC lib/thread/thread.o 00:01:59.837 CC lib/thread/iobuf.o 00:02:00.122 LIB libspdk_sock.a 00:02:00.122 SO libspdk_sock.so.9.0 00:02:00.122 SYMLINK libspdk_sock.so 00:02:00.410 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:00.410 CC lib/nvme/nvme_ctrlr.o 00:02:00.410 CC lib/nvme/nvme_fabric.o 00:02:00.410 CC lib/nvme/nvme_ns_cmd.o 00:02:00.410 CC lib/nvme/nvme_ns.o 00:02:00.410 CC lib/nvme/nvme_pcie_common.o 00:02:00.410 CC lib/nvme/nvme_pcie.o 00:02:00.410 CC lib/nvme/nvme_qpair.o 00:02:00.410 CC lib/nvme/nvme.o 00:02:00.410 CC lib/nvme/nvme_discovery.o 00:02:00.410 CC lib/nvme/nvme_quirks.o 00:02:00.410 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:00.410 CC lib/nvme/nvme_transport.o 00:02:00.410 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:00.410 CC lib/nvme/nvme_tcp.o 00:02:00.410 CC lib/nvme/nvme_opal.o 00:02:00.410 CC lib/nvme/nvme_io_msg.o 00:02:00.410 CC lib/nvme/nvme_poll_group.o 00:02:00.410 CC lib/nvme/nvme_zns.o 00:02:00.410 CC lib/nvme/nvme_stubs.o 00:02:00.410 CC lib/nvme/nvme_auth.o 00:02:00.410 CC lib/nvme/nvme_cuse.o 00:02:00.410 CC lib/nvme/nvme_rdma.o 00:02:00.669 LIB libspdk_thread.a 00:02:00.927 SO libspdk_thread.so.10.0 00:02:00.927 SYMLINK libspdk_thread.so 00:02:01.184 CC lib/accel/accel.o 00:02:01.184 CC lib/accel/accel_rpc.o 00:02:01.184 CC lib/accel/accel_sw.o 00:02:01.184 CC lib/init/json_config.o 00:02:01.184 CC lib/init/subsystem.o 00:02:01.184 CC lib/init/subsystem_rpc.o 00:02:01.184 CC lib/virtio/virtio_vfio_user.o 00:02:01.184 CC lib/virtio/virtio.o 00:02:01.184 CC lib/init/rpc.o 00:02:01.184 CC lib/virtio/virtio_pci.o 00:02:01.184 CC lib/virtio/virtio_vhost_user.o 00:02:01.184 CC lib/blob/blobstore.o 00:02:01.184 CC lib/blob/request.o 00:02:01.184 CC lib/blob/zeroes.o 00:02:01.184 CC lib/blob/blob_bs_dev.o 00:02:01.442 LIB libspdk_init.a 00:02:01.442 SO libspdk_init.so.5.0 00:02:01.442 LIB libspdk_virtio.a 00:02:01.442 SO libspdk_virtio.so.7.0 00:02:01.442 SYMLINK libspdk_init.so 00:02:01.442 SYMLINK libspdk_virtio.so 00:02:01.700 CC lib/event/app.o 00:02:01.700 CC lib/event/reactor.o 00:02:01.700 CC lib/event/app_rpc.o 00:02:01.700 CC lib/event/log_rpc.o 00:02:01.700 CC lib/event/scheduler_static.o 00:02:01.958 LIB libspdk_accel.a 00:02:01.958 SO libspdk_accel.so.15.0 00:02:01.958 LIB libspdk_nvme.a 00:02:01.958 SYMLINK libspdk_accel.so 00:02:01.958 SO libspdk_nvme.so.13.0 00:02:02.216 LIB libspdk_event.a 00:02:02.216 SO libspdk_event.so.13.0 00:02:02.216 SYMLINK libspdk_event.so 00:02:02.216 CC lib/bdev/bdev.o 00:02:02.216 
CC lib/bdev/bdev_rpc.o 00:02:02.216 CC lib/bdev/bdev_zone.o 00:02:02.216 CC lib/bdev/part.o 00:02:02.216 CC lib/bdev/scsi_nvme.o 00:02:02.216 SYMLINK libspdk_nvme.so 00:02:03.176 LIB libspdk_blob.a 00:02:03.176 SO libspdk_blob.so.11.0 00:02:03.434 SYMLINK libspdk_blob.so 00:02:03.692 CC lib/blobfs/blobfs.o 00:02:03.692 CC lib/blobfs/tree.o 00:02:03.692 CC lib/lvol/lvol.o 00:02:03.949 LIB libspdk_bdev.a 00:02:03.949 SO libspdk_bdev.so.15.0 00:02:04.207 SYMLINK libspdk_bdev.so 00:02:04.207 LIB libspdk_blobfs.a 00:02:04.207 SO libspdk_blobfs.so.10.0 00:02:04.207 SYMLINK libspdk_blobfs.so 00:02:04.464 LIB libspdk_lvol.a 00:02:04.464 CC lib/nbd/nbd.o 00:02:04.464 CC lib/nbd/nbd_rpc.o 00:02:04.464 CC lib/ublk/ublk.o 00:02:04.464 SO libspdk_lvol.so.10.0 00:02:04.464 CC lib/ublk/ublk_rpc.o 00:02:04.464 CC lib/nvmf/ctrlr.o 00:02:04.464 CC lib/nvmf/ctrlr_discovery.o 00:02:04.464 CC lib/nvmf/subsystem.o 00:02:04.464 CC lib/nvmf/ctrlr_bdev.o 00:02:04.464 CC lib/nvmf/nvmf.o 00:02:04.464 CC lib/nvmf/nvmf_rpc.o 00:02:04.464 CC lib/ftl/ftl_core.o 00:02:04.464 CC lib/nvmf/transport.o 00:02:04.464 CC lib/ftl/ftl_init.o 00:02:04.464 CC lib/ftl/ftl_debug.o 00:02:04.464 CC lib/scsi/dev.o 00:02:04.464 CC lib/ftl/ftl_layout.o 00:02:04.464 CC lib/nvmf/tcp.o 00:02:04.464 CC lib/scsi/lun.o 00:02:04.464 CC lib/nvmf/stubs.o 00:02:04.464 CC lib/scsi/port.o 00:02:04.464 CC lib/scsi/scsi_bdev.o 00:02:04.464 CC lib/ftl/ftl_io.o 00:02:04.464 CC lib/scsi/scsi.o 00:02:04.464 CC lib/nvmf/mdns_server.o 00:02:04.464 CC lib/scsi/scsi_pr.o 00:02:04.464 CC lib/ftl/ftl_sb.o 00:02:04.464 CC lib/ftl/ftl_l2p_flat.o 00:02:04.464 CC lib/nvmf/rdma.o 00:02:04.464 CC lib/ftl/ftl_l2p.o 00:02:04.464 CC lib/nvmf/auth.o 00:02:04.464 CC lib/scsi/scsi_rpc.o 00:02:04.464 CC lib/ftl/ftl_nv_cache.o 00:02:04.464 CC lib/scsi/task.o 00:02:04.464 CC lib/ftl/ftl_band.o 00:02:04.464 CC lib/ftl/ftl_band_ops.o 00:02:04.464 CC lib/ftl/ftl_writer.o 00:02:04.464 CC lib/ftl/ftl_rq.o 00:02:04.464 CC lib/ftl/ftl_p2l.o 00:02:04.464 CC lib/ftl/ftl_reloc.o 00:02:04.464 CC lib/ftl/ftl_l2p_cache.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.464 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.464 CC lib/ftl/utils/ftl_md.o 00:02:04.464 CC lib/ftl/utils/ftl_conf.o 00:02:04.464 CC lib/ftl/utils/ftl_mempool.o 00:02:04.464 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.464 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.464 CC lib/ftl/utils/ftl_property.o 00:02:04.464 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.464 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.464 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.464 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.464 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:04.464 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.464 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:04.464 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:04.464 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:04.464 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:04.464 CC lib/ftl/ftl_trace.o 00:02:04.464 CC lib/ftl/base/ftl_base_dev.o 00:02:04.464 CC lib/ftl/base/ftl_base_bdev.o 00:02:04.464 SYMLINK 
libspdk_lvol.so 00:02:05.029 LIB libspdk_nbd.a 00:02:05.029 LIB libspdk_scsi.a 00:02:05.029 SO libspdk_nbd.so.7.0 00:02:05.029 SO libspdk_scsi.so.9.0 00:02:05.029 LIB libspdk_ublk.a 00:02:05.029 SYMLINK libspdk_nbd.so 00:02:05.029 SO libspdk_ublk.so.3.0 00:02:05.029 SYMLINK libspdk_scsi.so 00:02:05.029 SYMLINK libspdk_ublk.so 00:02:05.287 CC lib/iscsi/conn.o 00:02:05.287 CC lib/iscsi/init_grp.o 00:02:05.287 CC lib/iscsi/iscsi.o 00:02:05.287 CC lib/iscsi/md5.o 00:02:05.287 CC lib/iscsi/param.o 00:02:05.287 CC lib/iscsi/portal_grp.o 00:02:05.287 CC lib/iscsi/tgt_node.o 00:02:05.287 CC lib/iscsi/iscsi_subsystem.o 00:02:05.287 CC lib/iscsi/iscsi_rpc.o 00:02:05.287 CC lib/iscsi/task.o 00:02:05.287 CC lib/vhost/vhost.o 00:02:05.287 CC lib/vhost/vhost_rpc.o 00:02:05.287 CC lib/vhost/vhost_scsi.o 00:02:05.287 CC lib/vhost/vhost_blk.o 00:02:05.287 CC lib/vhost/rte_vhost_user.o 00:02:05.287 LIB libspdk_ftl.a 00:02:05.545 SO libspdk_ftl.so.9.0 00:02:05.803 SYMLINK libspdk_ftl.so 00:02:06.062 LIB libspdk_nvmf.a 00:02:06.062 LIB libspdk_vhost.a 00:02:06.062 SO libspdk_nvmf.so.18.0 00:02:06.062 SO libspdk_vhost.so.8.0 00:02:06.322 SYMLINK libspdk_vhost.so 00:02:06.322 SYMLINK libspdk_nvmf.so 00:02:06.322 LIB libspdk_iscsi.a 00:02:06.322 SO libspdk_iscsi.so.8.0 00:02:06.581 SYMLINK libspdk_iscsi.so 00:02:06.840 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.098 LIB libspdk_env_dpdk_rpc.a 00:02:07.098 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.098 CC module/scheduler/gscheduler/gscheduler.o 00:02:07.098 CC module/blob/bdev/blob_bdev.o 00:02:07.098 CC module/sock/posix/posix.o 00:02:07.098 CC module/accel/dsa/accel_dsa.o 00:02:07.098 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.098 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.098 CC module/keyring/file/keyring_rpc.o 00:02:07.098 CC module/keyring/file/keyring.o 00:02:07.098 CC module/accel/ioat/accel_ioat.o 00:02:07.098 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.098 CC module/accel/iaa/accel_iaa.o 00:02:07.098 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.098 CC module/accel/error/accel_error.o 00:02:07.098 CC module/accel/error/accel_error_rpc.o 00:02:07.098 SO libspdk_env_dpdk_rpc.so.6.0 00:02:07.098 SYMLINK libspdk_env_dpdk_rpc.so 00:02:07.098 LIB libspdk_scheduler_dpdk_governor.a 00:02:07.098 LIB libspdk_scheduler_gscheduler.a 00:02:07.356 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:07.356 LIB libspdk_keyring_file.a 00:02:07.356 SO libspdk_scheduler_gscheduler.so.4.0 00:02:07.356 SO libspdk_keyring_file.so.1.0 00:02:07.356 LIB libspdk_accel_error.a 00:02:07.356 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:07.356 LIB libspdk_scheduler_dynamic.a 00:02:07.356 LIB libspdk_accel_ioat.a 00:02:07.356 LIB libspdk_accel_iaa.a 00:02:07.356 SO libspdk_accel_error.so.2.0 00:02:07.356 SO libspdk_accel_ioat.so.6.0 00:02:07.356 SO libspdk_scheduler_dynamic.so.4.0 00:02:07.356 SYMLINK libspdk_scheduler_gscheduler.so 00:02:07.356 SO libspdk_accel_iaa.so.3.0 00:02:07.356 LIB libspdk_blob_bdev.a 00:02:07.356 LIB libspdk_accel_dsa.a 00:02:07.356 SYMLINK libspdk_keyring_file.so 00:02:07.356 SYMLINK libspdk_accel_error.so 00:02:07.356 SYMLINK libspdk_accel_ioat.so 00:02:07.356 SO libspdk_blob_bdev.so.11.0 00:02:07.356 SO libspdk_accel_dsa.so.5.0 00:02:07.356 SYMLINK libspdk_scheduler_dynamic.so 00:02:07.356 SYMLINK libspdk_accel_iaa.so 00:02:07.356 SYMLINK libspdk_blob_bdev.so 00:02:07.356 SYMLINK libspdk_accel_dsa.so 00:02:07.615 LIB libspdk_sock_posix.a 00:02:07.615 SO libspdk_sock_posix.so.6.0 00:02:07.872 SYMLINK libspdk_sock_posix.so 
00:02:07.872 CC module/bdev/gpt/gpt.o 00:02:07.872 CC module/bdev/gpt/vbdev_gpt.o 00:02:07.872 CC module/bdev/malloc/bdev_malloc.o 00:02:07.872 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:07.872 CC module/bdev/split/vbdev_split.o 00:02:07.872 CC module/bdev/split/vbdev_split_rpc.o 00:02:07.872 CC module/bdev/lvol/vbdev_lvol.o 00:02:07.872 CC module/bdev/delay/vbdev_delay.o 00:02:07.872 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:07.872 CC module/bdev/nvme/bdev_nvme.o 00:02:07.872 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:07.872 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:07.872 CC module/bdev/nvme/nvme_rpc.o 00:02:07.872 CC module/bdev/nvme/vbdev_opal.o 00:02:07.872 CC module/bdev/aio/bdev_aio.o 00:02:07.872 CC module/bdev/nvme/bdev_mdns_client.o 00:02:07.873 CC module/bdev/aio/bdev_aio_rpc.o 00:02:07.873 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:07.873 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:07.873 CC module/bdev/raid/bdev_raid.o 00:02:07.873 CC module/bdev/raid/bdev_raid_rpc.o 00:02:07.873 CC module/blobfs/bdev/blobfs_bdev.o 00:02:07.873 CC module/bdev/raid/bdev_raid_sb.o 00:02:07.873 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:07.873 CC module/bdev/raid/concat.o 00:02:07.873 CC module/bdev/raid/raid0.o 00:02:07.873 CC module/bdev/passthru/vbdev_passthru.o 00:02:07.873 CC module/bdev/error/vbdev_error.o 00:02:07.873 CC module/bdev/raid/raid1.o 00:02:07.873 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:07.873 CC module/bdev/error/vbdev_error_rpc.o 00:02:07.873 CC module/bdev/null/bdev_null.o 00:02:07.873 CC module/bdev/null/bdev_null_rpc.o 00:02:07.873 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:07.873 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:07.873 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:07.873 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:07.873 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:07.873 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:07.873 CC module/bdev/iscsi/bdev_iscsi.o 00:02:07.873 CC module/bdev/ftl/bdev_ftl.o 00:02:07.873 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.131 LIB libspdk_blobfs_bdev.a 00:02:08.131 SO libspdk_blobfs_bdev.so.6.0 00:02:08.131 LIB libspdk_bdev_split.a 00:02:08.131 LIB libspdk_bdev_gpt.a 00:02:08.131 LIB libspdk_bdev_null.a 00:02:08.131 SO libspdk_bdev_split.so.6.0 00:02:08.131 LIB libspdk_bdev_error.a 00:02:08.131 SO libspdk_bdev_gpt.so.6.0 00:02:08.131 SYMLINK libspdk_blobfs_bdev.so 00:02:08.131 SO libspdk_bdev_null.so.6.0 00:02:08.131 SO libspdk_bdev_error.so.6.0 00:02:08.131 LIB libspdk_bdev_passthru.a 00:02:08.131 LIB libspdk_bdev_ftl.a 00:02:08.131 SYMLINK libspdk_bdev_split.so 00:02:08.131 LIB libspdk_bdev_aio.a 00:02:08.131 LIB libspdk_bdev_malloc.a 00:02:08.131 SO libspdk_bdev_passthru.so.6.0 00:02:08.131 LIB libspdk_bdev_zone_block.a 00:02:08.131 SYMLINK libspdk_bdev_gpt.so 00:02:08.131 SO libspdk_bdev_ftl.so.6.0 00:02:08.131 SYMLINK libspdk_bdev_null.so 00:02:08.132 LIB libspdk_bdev_delay.a 00:02:08.390 SYMLINK libspdk_bdev_error.so 00:02:08.390 LIB libspdk_bdev_iscsi.a 00:02:08.390 SO libspdk_bdev_aio.so.6.0 00:02:08.390 SO libspdk_bdev_malloc.so.6.0 00:02:08.390 SO libspdk_bdev_zone_block.so.6.0 00:02:08.390 SYMLINK libspdk_bdev_passthru.so 00:02:08.390 SO libspdk_bdev_delay.so.6.0 00:02:08.390 SO libspdk_bdev_iscsi.so.6.0 00:02:08.390 SYMLINK libspdk_bdev_ftl.so 00:02:08.390 SYMLINK libspdk_bdev_zone_block.so 00:02:08.390 SYMLINK libspdk_bdev_aio.so 00:02:08.390 SYMLINK libspdk_bdev_malloc.so 00:02:08.390 SYMLINK libspdk_bdev_delay.so 00:02:08.390 LIB libspdk_bdev_virtio.a 00:02:08.390 LIB 
libspdk_bdev_lvol.a 00:02:08.390 SYMLINK libspdk_bdev_iscsi.so 00:02:08.390 SO libspdk_bdev_virtio.so.6.0 00:02:08.390 SO libspdk_bdev_lvol.so.6.0 00:02:08.390 SYMLINK libspdk_bdev_lvol.so 00:02:08.390 SYMLINK libspdk_bdev_virtio.so 00:02:08.648 LIB libspdk_bdev_raid.a 00:02:08.648 SO libspdk_bdev_raid.so.6.0 00:02:08.907 SYMLINK libspdk_bdev_raid.so 00:02:09.473 LIB libspdk_bdev_nvme.a 00:02:09.473 SO libspdk_bdev_nvme.so.7.0 00:02:09.732 SYMLINK libspdk_bdev_nvme.so 00:02:10.298 CC module/event/subsystems/scheduler/scheduler.o 00:02:10.298 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:10.298 CC module/event/subsystems/sock/sock.o 00:02:10.298 CC module/event/subsystems/vmd/vmd.o 00:02:10.298 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:10.298 CC module/event/subsystems/iobuf/iobuf.o 00:02:10.298 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:10.298 CC module/event/subsystems/keyring/keyring.o 00:02:10.298 LIB libspdk_event_vhost_blk.a 00:02:10.298 LIB libspdk_event_scheduler.a 00:02:10.298 LIB libspdk_event_sock.a 00:02:10.298 SO libspdk_event_vhost_blk.so.3.0 00:02:10.298 SO libspdk_event_scheduler.so.4.0 00:02:10.298 LIB libspdk_event_keyring.a 00:02:10.298 LIB libspdk_event_vmd.a 00:02:10.298 SO libspdk_event_sock.so.5.0 00:02:10.298 LIB libspdk_event_iobuf.a 00:02:10.298 SO libspdk_event_keyring.so.1.0 00:02:10.298 SO libspdk_event_vmd.so.6.0 00:02:10.298 SYMLINK libspdk_event_vhost_blk.so 00:02:10.298 SO libspdk_event_iobuf.so.3.0 00:02:10.298 SYMLINK libspdk_event_sock.so 00:02:10.298 SYMLINK libspdk_event_scheduler.so 00:02:10.557 SYMLINK libspdk_event_vmd.so 00:02:10.557 SYMLINK libspdk_event_keyring.so 00:02:10.557 SYMLINK libspdk_event_iobuf.so 00:02:10.815 CC module/event/subsystems/accel/accel.o 00:02:10.815 LIB libspdk_event_accel.a 00:02:10.815 SO libspdk_event_accel.so.6.0 00:02:10.815 SYMLINK libspdk_event_accel.so 00:02:11.074 CC module/event/subsystems/bdev/bdev.o 00:02:11.332 LIB libspdk_event_bdev.a 00:02:11.332 SO libspdk_event_bdev.so.6.0 00:02:11.332 SYMLINK libspdk_event_bdev.so 00:02:11.590 CC module/event/subsystems/scsi/scsi.o 00:02:11.590 CC module/event/subsystems/ublk/ublk.o 00:02:11.590 CC module/event/subsystems/nbd/nbd.o 00:02:11.590 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:11.590 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:11.849 LIB libspdk_event_ublk.a 00:02:11.849 LIB libspdk_event_scsi.a 00:02:11.849 LIB libspdk_event_nbd.a 00:02:11.849 SO libspdk_event_ublk.so.3.0 00:02:11.849 SO libspdk_event_scsi.so.6.0 00:02:11.849 SO libspdk_event_nbd.so.6.0 00:02:11.849 LIB libspdk_event_nvmf.a 00:02:11.849 SYMLINK libspdk_event_ublk.so 00:02:11.849 SYMLINK libspdk_event_scsi.so 00:02:11.849 SO libspdk_event_nvmf.so.6.0 00:02:11.849 SYMLINK libspdk_event_nbd.so 00:02:12.108 SYMLINK libspdk_event_nvmf.so 00:02:12.108 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.108 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.366 LIB libspdk_event_iscsi.a 00:02:12.366 LIB libspdk_event_vhost_scsi.a 00:02:12.366 SO libspdk_event_iscsi.so.6.0 00:02:12.366 SO libspdk_event_vhost_scsi.so.3.0 00:02:12.366 SYMLINK libspdk_event_iscsi.so 00:02:12.366 SYMLINK libspdk_event_vhost_scsi.so 00:02:12.625 SO libspdk.so.6.0 00:02:12.625 SYMLINK libspdk.so 00:02:12.891 CXX app/trace/trace.o 00:02:12.891 CC app/spdk_top/spdk_top.o 00:02:12.891 CC app/trace_record/trace_record.o 00:02:12.891 TEST_HEADER include/spdk/accel.h 00:02:12.891 CC app/spdk_nvme_perf/perf.o 00:02:12.891 TEST_HEADER include/spdk/assert.h 00:02:12.891 TEST_HEADER 
include/spdk/barrier.h 00:02:12.891 CC app/spdk_nvme_discover/discovery_aer.o 00:02:12.891 CC test/rpc_client/rpc_client_test.o 00:02:12.891 TEST_HEADER include/spdk/base64.h 00:02:12.891 TEST_HEADER include/spdk/bdev.h 00:02:12.891 TEST_HEADER include/spdk/accel_module.h 00:02:12.891 TEST_HEADER include/spdk/bdev_module.h 00:02:12.891 CC app/spdk_nvme_identify/identify.o 00:02:12.891 TEST_HEADER include/spdk/bdev_zone.h 00:02:12.891 TEST_HEADER include/spdk/bit_array.h 00:02:12.891 TEST_HEADER include/spdk/blob_bdev.h 00:02:12.891 TEST_HEADER include/spdk/bit_pool.h 00:02:12.891 TEST_HEADER include/spdk/blobfs.h 00:02:12.891 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:12.891 TEST_HEADER include/spdk/blob.h 00:02:12.891 TEST_HEADER include/spdk/conf.h 00:02:12.891 TEST_HEADER include/spdk/cpuset.h 00:02:12.891 TEST_HEADER include/spdk/config.h 00:02:12.891 TEST_HEADER include/spdk/crc16.h 00:02:12.891 TEST_HEADER include/spdk/crc32.h 00:02:12.891 TEST_HEADER include/spdk/crc64.h 00:02:12.891 TEST_HEADER include/spdk/dif.h 00:02:12.891 TEST_HEADER include/spdk/dma.h 00:02:12.891 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:12.891 TEST_HEADER include/spdk/endian.h 00:02:12.891 TEST_HEADER include/spdk/env_dpdk.h 00:02:12.891 TEST_HEADER include/spdk/env.h 00:02:12.891 TEST_HEADER include/spdk/event.h 00:02:12.891 TEST_HEADER include/spdk/fd_group.h 00:02:12.891 CC app/spdk_lspci/spdk_lspci.o 00:02:12.891 TEST_HEADER include/spdk/fd.h 00:02:12.891 TEST_HEADER include/spdk/file.h 00:02:12.891 TEST_HEADER include/spdk/ftl.h 00:02:12.891 TEST_HEADER include/spdk/gpt_spec.h 00:02:12.891 TEST_HEADER include/spdk/hexlify.h 00:02:12.891 TEST_HEADER include/spdk/histogram_data.h 00:02:12.891 TEST_HEADER include/spdk/idxd_spec.h 00:02:12.891 TEST_HEADER include/spdk/idxd.h 00:02:12.891 TEST_HEADER include/spdk/init.h 00:02:12.891 TEST_HEADER include/spdk/ioat.h 00:02:12.891 TEST_HEADER include/spdk/ioat_spec.h 00:02:12.891 TEST_HEADER include/spdk/iscsi_spec.h 00:02:12.891 TEST_HEADER include/spdk/jsonrpc.h 00:02:12.891 TEST_HEADER include/spdk/json.h 00:02:12.891 CC app/iscsi_tgt/iscsi_tgt.o 00:02:12.891 TEST_HEADER include/spdk/keyring.h 00:02:12.891 TEST_HEADER include/spdk/keyring_module.h 00:02:12.891 TEST_HEADER include/spdk/likely.h 00:02:12.891 TEST_HEADER include/spdk/lvol.h 00:02:12.891 TEST_HEADER include/spdk/log.h 00:02:12.891 TEST_HEADER include/spdk/memory.h 00:02:12.891 TEST_HEADER include/spdk/mmio.h 00:02:12.891 CC app/nvmf_tgt/nvmf_main.o 00:02:12.891 TEST_HEADER include/spdk/nbd.h 00:02:12.891 TEST_HEADER include/spdk/notify.h 00:02:12.891 TEST_HEADER include/spdk/nvme.h 00:02:12.891 TEST_HEADER include/spdk/nvme_intel.h 00:02:12.891 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:12.891 TEST_HEADER include/spdk/nvme_spec.h 00:02:12.891 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:12.891 TEST_HEADER include/spdk/nvme_zns.h 00:02:12.891 CC app/spdk_dd/spdk_dd.o 00:02:12.891 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:12.891 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:12.891 TEST_HEADER include/spdk/nvmf.h 00:02:12.891 TEST_HEADER include/spdk/nvmf_spec.h 00:02:12.891 TEST_HEADER include/spdk/nvmf_transport.h 00:02:12.891 TEST_HEADER include/spdk/opal.h 00:02:12.891 TEST_HEADER include/spdk/pipe.h 00:02:12.891 TEST_HEADER include/spdk/opal_spec.h 00:02:12.891 TEST_HEADER include/spdk/queue.h 00:02:12.891 TEST_HEADER include/spdk/pci_ids.h 00:02:12.891 TEST_HEADER include/spdk/reduce.h 00:02:12.891 TEST_HEADER include/spdk/rpc.h 00:02:12.891 TEST_HEADER include/spdk/scheduler.h 
00:02:12.891 CC app/spdk_tgt/spdk_tgt.o 00:02:12.891 TEST_HEADER include/spdk/scsi.h 00:02:12.891 TEST_HEADER include/spdk/sock.h 00:02:12.891 TEST_HEADER include/spdk/stdinc.h 00:02:12.891 TEST_HEADER include/spdk/scsi_spec.h 00:02:12.891 TEST_HEADER include/spdk/string.h 00:02:12.891 TEST_HEADER include/spdk/thread.h 00:02:12.891 TEST_HEADER include/spdk/tree.h 00:02:12.891 TEST_HEADER include/spdk/trace.h 00:02:12.891 TEST_HEADER include/spdk/ublk.h 00:02:12.891 TEST_HEADER include/spdk/trace_parser.h 00:02:12.891 TEST_HEADER include/spdk/util.h 00:02:12.891 CC app/vhost/vhost.o 00:02:12.891 TEST_HEADER include/spdk/version.h 00:02:12.891 TEST_HEADER include/spdk/uuid.h 00:02:12.891 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:12.891 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:12.891 TEST_HEADER include/spdk/vhost.h 00:02:12.891 TEST_HEADER include/spdk/xor.h 00:02:12.891 TEST_HEADER include/spdk/vmd.h 00:02:12.891 TEST_HEADER include/spdk/zipf.h 00:02:12.891 CXX test/cpp_headers/accel.o 00:02:12.891 CXX test/cpp_headers/accel_module.o 00:02:12.891 CXX test/cpp_headers/barrier.o 00:02:12.891 CXX test/cpp_headers/base64.o 00:02:12.891 CXX test/cpp_headers/assert.o 00:02:12.891 CXX test/cpp_headers/bdev.o 00:02:12.891 CXX test/cpp_headers/bdev_zone.o 00:02:12.891 CXX test/cpp_headers/bdev_module.o 00:02:12.891 CXX test/cpp_headers/bit_array.o 00:02:12.891 CXX test/cpp_headers/bit_pool.o 00:02:12.891 CXX test/cpp_headers/blob_bdev.o 00:02:12.891 CXX test/cpp_headers/blobfs_bdev.o 00:02:12.891 CXX test/cpp_headers/blobfs.o 00:02:12.891 CXX test/cpp_headers/blob.o 00:02:12.891 CXX test/cpp_headers/conf.o 00:02:12.891 CXX test/cpp_headers/config.o 00:02:12.891 CXX test/cpp_headers/cpuset.o 00:02:12.891 CXX test/cpp_headers/crc16.o 00:02:12.891 CXX test/cpp_headers/crc32.o 00:02:12.891 CXX test/cpp_headers/crc64.o 00:02:13.150 CXX test/cpp_headers/dif.o 00:02:13.150 CXX test/cpp_headers/dma.o 00:02:13.150 CC examples/nvme/reconnect/reconnect.o 00:02:13.150 CC examples/ioat/verify/verify.o 00:02:13.150 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:13.150 CC test/event/event_perf/event_perf.o 00:02:13.150 CC examples/sock/hello_world/hello_sock.o 00:02:13.150 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.150 CC examples/nvme/hotplug/hotplug.o 00:02:13.150 CC examples/accel/perf/accel_perf.o 00:02:13.150 CC examples/ioat/perf/perf.o 00:02:13.150 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:13.150 CC examples/vmd/lsvmd/lsvmd.o 00:02:13.150 CC examples/nvme/arbitration/arbitration.o 00:02:13.150 CC test/app/jsoncat/jsoncat.o 00:02:13.150 CC test/app/histogram_perf/histogram_perf.o 00:02:13.150 CC test/event/reactor_perf/reactor_perf.o 00:02:13.150 CC examples/nvme/hello_world/hello_world.o 00:02:13.150 CC test/app/stub/stub.o 00:02:13.150 CC test/env/vtophys/vtophys.o 00:02:13.150 CC test/event/reactor/reactor.o 00:02:13.150 CC examples/idxd/perf/perf.o 00:02:13.150 CC test/nvme/reset/reset.o 00:02:13.150 CC examples/bdev/hello_world/hello_bdev.o 00:02:13.150 CC test/nvme/overhead/overhead.o 00:02:13.150 CC examples/blob/cli/blobcli.o 00:02:13.150 CC examples/nvmf/nvmf/nvmf.o 00:02:13.150 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:13.150 CC app/fio/nvme/fio_plugin.o 00:02:13.150 CC examples/nvme/abort/abort.o 00:02:13.150 CC test/nvme/err_injection/err_injection.o 00:02:13.150 CC examples/vmd/led/led.o 00:02:13.150 CC examples/util/zipf/zipf.o 00:02:13.150 CC test/env/pci/pci_ut.o 00:02:13.150 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:13.150 CC 
test/nvme/fused_ordering/fused_ordering.o 00:02:13.150 CC test/nvme/sgl/sgl.o 00:02:13.150 CC test/event/app_repeat/app_repeat.o 00:02:13.150 CC test/nvme/boot_partition/boot_partition.o 00:02:13.150 CC test/nvme/startup/startup.o 00:02:13.150 CC test/thread/poller_perf/poller_perf.o 00:02:13.150 CC test/nvme/aer/aer.o 00:02:13.150 CC test/nvme/compliance/nvme_compliance.o 00:02:13.150 CC test/dma/test_dma/test_dma.o 00:02:13.150 CC test/accel/dif/dif.o 00:02:13.150 CC test/nvme/reserve/reserve.o 00:02:13.150 CC test/env/memory/memory_ut.o 00:02:13.150 CC test/nvme/simple_copy/simple_copy.o 00:02:13.150 CC test/blobfs/mkfs/mkfs.o 00:02:13.151 CC test/nvme/cuse/cuse.o 00:02:13.151 CC test/nvme/e2edp/nvme_dp.o 00:02:13.151 CC test/nvme/fdp/fdp.o 00:02:13.151 CC test/app/bdev_svc/bdev_svc.o 00:02:13.151 CC examples/bdev/bdevperf/bdevperf.o 00:02:13.151 CC examples/thread/thread/thread_ex.o 00:02:13.151 CC test/nvme/connect_stress/connect_stress.o 00:02:13.151 CC test/bdev/bdevio/bdevio.o 00:02:13.151 CC examples/blob/hello_world/hello_blob.o 00:02:13.151 CC test/event/scheduler/scheduler.o 00:02:13.151 CC app/fio/bdev/fio_plugin.o 00:02:13.415 LINK rpc_client_test 00:02:13.415 LINK spdk_lspci 00:02:13.415 LINK spdk_nvme_discover 00:02:13.415 LINK nvmf_tgt 00:02:13.415 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:13.415 CC test/env/mem_callbacks/mem_callbacks.o 00:02:13.415 LINK interrupt_tgt 00:02:13.415 LINK vhost 00:02:13.415 LINK iscsi_tgt 00:02:13.415 CC test/lvol/esnap/esnap.o 00:02:13.415 LINK spdk_tgt 00:02:13.415 LINK spdk_trace_record 00:02:13.415 LINK lsvmd 00:02:13.675 LINK reactor 00:02:13.675 LINK vtophys 00:02:13.675 LINK histogram_perf 00:02:13.675 CXX test/cpp_headers/endian.o 00:02:13.675 CXX test/cpp_headers/env_dpdk.o 00:02:13.675 CXX test/cpp_headers/env.o 00:02:13.675 CXX test/cpp_headers/event.o 00:02:13.675 LINK event_perf 00:02:13.675 LINK reactor_perf 00:02:13.675 CXX test/cpp_headers/fd_group.o 00:02:13.675 LINK stub 00:02:13.675 LINK zipf 00:02:13.675 LINK jsoncat 00:02:13.675 CXX test/cpp_headers/fd.o 00:02:13.675 CXX test/cpp_headers/file.o 00:02:13.675 LINK led 00:02:13.675 CXX test/cpp_headers/ftl.o 00:02:13.675 LINK pmr_persistence 00:02:13.675 CXX test/cpp_headers/gpt_spec.o 00:02:13.675 LINK cmb_copy 00:02:13.675 CXX test/cpp_headers/histogram_data.o 00:02:13.675 CXX test/cpp_headers/hexlify.o 00:02:13.675 LINK verify 00:02:13.675 LINK app_repeat 00:02:13.675 LINK poller_perf 00:02:13.675 LINK env_dpdk_post_init 00:02:13.675 LINK bdev_svc 00:02:13.675 CXX test/cpp_headers/idxd.o 00:02:13.675 CXX test/cpp_headers/idxd_spec.o 00:02:13.675 LINK connect_stress 00:02:13.675 LINK startup 00:02:13.675 CXX test/cpp_headers/init.o 00:02:13.675 LINK doorbell_aers 00:02:13.675 LINK boot_partition 00:02:13.675 LINK err_injection 00:02:13.675 CXX test/cpp_headers/ioat.o 00:02:13.675 CXX test/cpp_headers/ioat_spec.o 00:02:13.675 CXX test/cpp_headers/iscsi_spec.o 00:02:13.675 CXX test/cpp_headers/json.o 00:02:13.675 LINK ioat_perf 00:02:13.675 CXX test/cpp_headers/jsonrpc.o 00:02:13.675 LINK fused_ordering 00:02:13.675 CXX test/cpp_headers/keyring.o 00:02:13.675 LINK hello_world 00:02:13.675 LINK hotplug 00:02:13.675 LINK reserve 00:02:13.675 LINK hello_sock 00:02:13.675 CXX test/cpp_headers/keyring_module.o 00:02:13.675 LINK hello_blob 00:02:13.675 LINK spdk_dd 00:02:13.675 CXX test/cpp_headers/likely.o 00:02:13.675 LINK sgl 00:02:13.675 LINK mkfs 00:02:13.675 LINK nvmf 00:02:13.675 LINK hello_bdev 00:02:13.675 LINK simple_copy 00:02:13.935 LINK scheduler 00:02:13.935 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:13.935 LINK reset 00:02:13.935 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:13.935 CXX test/cpp_headers/log.o 00:02:13.935 CXX test/cpp_headers/lvol.o 00:02:13.935 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:13.935 CXX test/cpp_headers/memory.o 00:02:13.935 CXX test/cpp_headers/mmio.o 00:02:13.935 LINK spdk_trace 00:02:13.935 CXX test/cpp_headers/nbd.o 00:02:13.935 LINK nvme_dp 00:02:13.935 LINK overhead 00:02:13.935 LINK thread 00:02:13.935 CXX test/cpp_headers/notify.o 00:02:13.935 LINK arbitration 00:02:13.935 CXX test/cpp_headers/nvme_intel.o 00:02:13.935 LINK aer 00:02:13.935 CXX test/cpp_headers/nvme.o 00:02:13.935 LINK idxd_perf 00:02:13.935 CXX test/cpp_headers/nvme_ocssd.o 00:02:13.935 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:13.935 CXX test/cpp_headers/nvme_spec.o 00:02:13.935 LINK abort 00:02:13.935 CXX test/cpp_headers/nvme_zns.o 00:02:13.935 LINK fdp 00:02:13.935 CXX test/cpp_headers/nvmf_cmd.o 00:02:13.935 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:13.935 LINK reconnect 00:02:13.935 CXX test/cpp_headers/nvmf.o 00:02:13.935 CXX test/cpp_headers/nvmf_spec.o 00:02:13.935 LINK nvme_compliance 00:02:13.935 CXX test/cpp_headers/nvmf_transport.o 00:02:13.935 CXX test/cpp_headers/opal.o 00:02:13.935 CXX test/cpp_headers/opal_spec.o 00:02:13.935 CXX test/cpp_headers/pci_ids.o 00:02:13.935 CXX test/cpp_headers/pipe.o 00:02:13.935 CXX test/cpp_headers/queue.o 00:02:13.935 CXX test/cpp_headers/reduce.o 00:02:13.935 CXX test/cpp_headers/rpc.o 00:02:13.935 CXX test/cpp_headers/scheduler.o 00:02:13.935 CXX test/cpp_headers/scsi.o 00:02:13.935 CXX test/cpp_headers/scsi_spec.o 00:02:13.935 CXX test/cpp_headers/sock.o 00:02:13.935 CXX test/cpp_headers/stdinc.o 00:02:13.935 CXX test/cpp_headers/string.o 00:02:13.935 CXX test/cpp_headers/thread.o 00:02:13.935 LINK pci_ut 00:02:13.935 LINK test_dma 00:02:13.935 CXX test/cpp_headers/tree.o 00:02:13.935 CXX test/cpp_headers/trace.o 00:02:13.935 CXX test/cpp_headers/trace_parser.o 00:02:13.935 CXX test/cpp_headers/ublk.o 00:02:14.193 CXX test/cpp_headers/uuid.o 00:02:14.193 CXX test/cpp_headers/util.o 00:02:14.193 CXX test/cpp_headers/version.o 00:02:14.194 LINK accel_perf 00:02:14.194 LINK bdevio 00:02:14.194 CXX test/cpp_headers/vfio_user_pci.o 00:02:14.194 LINK dif 00:02:14.194 CXX test/cpp_headers/vfio_user_spec.o 00:02:14.194 CXX test/cpp_headers/vhost.o 00:02:14.194 CXX test/cpp_headers/vmd.o 00:02:14.194 LINK nvme_manage 00:02:14.194 CXX test/cpp_headers/xor.o 00:02:14.194 CXX test/cpp_headers/zipf.o 00:02:14.194 LINK blobcli 00:02:14.194 LINK nvme_fuzz 00:02:14.194 LINK spdk_nvme_perf 00:02:14.194 LINK spdk_nvme 00:02:14.194 LINK spdk_nvme_identify 00:02:14.452 LINK spdk_bdev 00:02:14.452 LINK mem_callbacks 00:02:14.452 LINK spdk_top 00:02:14.452 LINK vhost_fuzz 00:02:14.710 LINK bdevperf 00:02:14.710 LINK memory_ut 00:02:14.969 LINK cuse 00:02:15.228 LINK iscsi_fuzz 00:02:17.131 LINK esnap 00:02:17.698 00:02:17.698 real 0m43.079s 00:02:17.698 user 6m43.646s 00:02:17.698 sys 3m36.092s 00:02:17.698 20:11:30 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:17.698 20:11:30 make -- common/autotest_common.sh@10 -- $ set +x 00:02:17.698 ************************************ 00:02:17.698 END TEST make 00:02:17.698 ************************************ 00:02:17.698 20:11:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:17.698 20:11:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:17.698 20:11:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 
00:02:17.698 20:11:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.698 20:11:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:17.698 20:11:30 -- pm/common@44 -- $ pid=2761797 00:02:17.698 20:11:30 -- pm/common@50 -- $ kill -TERM 2761797 00:02:17.698 20:11:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.698 20:11:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:17.698 20:11:30 -- pm/common@44 -- $ pid=2761799 00:02:17.698 20:11:30 -- pm/common@50 -- $ kill -TERM 2761799 00:02:17.698 20:11:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.698 20:11:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:17.698 20:11:30 -- pm/common@44 -- $ pid=2761800 00:02:17.698 20:11:30 -- pm/common@50 -- $ kill -TERM 2761800 00:02:17.698 20:11:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.698 20:11:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:17.698 20:11:30 -- pm/common@44 -- $ pid=2761829 00:02:17.698 20:11:30 -- pm/common@50 -- $ sudo -E kill -TERM 2761829 00:02:17.698 20:11:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:17.698 20:11:30 -- nvmf/common.sh@7 -- # uname -s 00:02:17.698 20:11:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:17.698 20:11:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:17.698 20:11:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:17.698 20:11:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:17.698 20:11:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:17.698 20:11:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:17.698 20:11:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:17.698 20:11:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:17.698 20:11:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:17.698 20:11:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:17.698 20:11:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:02:17.698 20:11:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:02:17.698 20:11:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:17.698 20:11:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:17.698 20:11:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:17.698 20:11:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:17.698 20:11:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:17.698 20:11:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:17.698 20:11:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.698 20:11:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.698 20:11:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.698 20:11:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.698 20:11:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.698 20:11:30 -- paths/export.sh@5 -- # export PATH 00:02:17.698 20:11:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.698 20:11:30 -- nvmf/common.sh@47 -- # : 0 00:02:17.698 20:11:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:17.698 20:11:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:17.698 20:11:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:17.698 20:11:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:17.698 20:11:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:17.698 20:11:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:17.698 20:11:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:17.698 20:11:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:17.698 20:11:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:17.698 20:11:30 -- spdk/autotest.sh@32 -- # uname -s 00:02:17.698 20:11:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:17.698 20:11:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:17.698 20:11:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:17.698 20:11:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:17.698 20:11:30 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:17.698 20:11:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:17.698 20:11:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:17.698 20:11:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:17.698 20:11:30 -- spdk/autotest.sh@48 -- # udevadm_pid=2820036 00:02:17.698 20:11:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:17.698 20:11:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:17.698 20:11:30 -- pm/common@17 -- # local monitor 00:02:17.698 20:11:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.698 20:11:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.698 20:11:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.698 20:11:30 -- pm/common@21 -- # date +%s 00:02:17.698 20:11:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.698 20:11:30 -- pm/common@21 -- # date +%s 00:02:17.698 20:11:30 -- pm/common@25 -- # sleep 1 00:02:17.698 20:11:30 -- pm/common@21 -- # date +%s 00:02:17.698 20:11:30 -- pm/common@21 -- # date +%s 00:02:17.698 20:11:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715883090 00:02:17.698 20:11:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715883090 00:02:17.698 20:11:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715883090 00:02:17.698 20:11:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715883090 00:02:17.698 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715883090_collect-vmstat.pm.log 00:02:17.957 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715883090_collect-cpu-temp.pm.log 00:02:17.957 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715883090_collect-cpu-load.pm.log 00:02:17.957 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715883090_collect-bmc-pm.bmc.pm.log 00:02:18.895 20:11:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:18.895 20:11:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:18.895 20:11:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:18.895 20:11:31 -- common/autotest_common.sh@10 -- # set +x 00:02:18.895 20:11:31 -- spdk/autotest.sh@59 -- # create_test_list 00:02:18.895 20:11:31 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:18.895 20:11:31 -- common/autotest_common.sh@10 -- # set +x 00:02:18.896 20:11:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:18.896 20:11:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:18.896 20:11:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:18.896 20:11:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:18.896 20:11:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:18.896 20:11:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:18.896 20:11:31 -- common/autotest_common.sh@1451 -- # uname 00:02:18.896 20:11:31 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:18.896 20:11:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:18.896 20:11:31 -- common/autotest_common.sh@1471 -- # uname 00:02:18.896 20:11:31 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:18.896 20:11:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:18.896 20:11:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:18.896 20:11:31 -- spdk/autotest.sh@72 -- # hash lcov 00:02:18.896 20:11:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:18.896 20:11:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:18.896 --rc lcov_branch_coverage=1 00:02:18.896 --rc lcov_function_coverage=1 00:02:18.896 --rc genhtml_branch_coverage=1 00:02:18.896 --rc genhtml_function_coverage=1 00:02:18.896 --rc genhtml_legend=1 00:02:18.896 --rc geninfo_all_blocks=1 00:02:18.896 ' 00:02:18.896 20:11:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 
00:02:18.896 --rc lcov_branch_coverage=1 00:02:18.896 --rc lcov_function_coverage=1 00:02:18.896 --rc genhtml_branch_coverage=1 00:02:18.896 --rc genhtml_function_coverage=1 00:02:18.896 --rc genhtml_legend=1 00:02:18.896 --rc geninfo_all_blocks=1 00:02:18.896 ' 00:02:18.896 20:11:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:18.896 --rc lcov_branch_coverage=1 00:02:18.896 --rc lcov_function_coverage=1 00:02:18.896 --rc genhtml_branch_coverage=1 00:02:18.896 --rc genhtml_function_coverage=1 00:02:18.896 --rc genhtml_legend=1 00:02:18.896 --rc geninfo_all_blocks=1 00:02:18.896 --no-external' 00:02:18.896 20:11:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:18.896 --rc lcov_branch_coverage=1 00:02:18.896 --rc lcov_function_coverage=1 00:02:18.896 --rc genhtml_branch_coverage=1 00:02:18.896 --rc genhtml_function_coverage=1 00:02:18.896 --rc genhtml_legend=1 00:02:18.896 --rc geninfo_all_blocks=1 00:02:18.896 --no-external' 00:02:18.896 20:11:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:18.896 lcov: LCOV version 1.14 00:02:18.896 20:11:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:28.871 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:28.871 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:41.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:41.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:41.109 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions 
found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:41.109 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:41.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:41.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:41.110 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:42.045 20:11:54 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:42.045 20:11:54 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:42.045 20:11:54 -- common/autotest_common.sh@10 -- # set +x 00:02:42.045 20:11:54 -- spdk/autotest.sh@91 -- # rm -f 00:02:42.045 20:11:54 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.576 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:02:44.576 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:44.576 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:44.576 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:44.576 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:00:04.0 (8086 
2021): Already using the ioatdma driver 00:02:44.836 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:44.836 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:44.836 20:11:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:44.836 20:11:57 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:44.836 20:11:57 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:44.836 20:11:57 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:44.836 20:11:57 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:44.836 20:11:57 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:45.094 20:11:57 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:45.094 20:11:57 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.094 20:11:57 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:45.094 20:11:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:45.094 20:11:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:45.094 20:11:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:45.094 20:11:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:45.094 20:11:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:45.094 20:11:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:45.094 No valid GPT data, bailing 00:02:45.094 20:11:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:45.094 20:11:57 -- scripts/common.sh@391 -- # pt= 00:02:45.094 20:11:57 -- scripts/common.sh@392 -- # return 1 00:02:45.094 20:11:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:45.094 1+0 records in 00:02:45.094 1+0 records out 00:02:45.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00210088 s, 499 MB/s 00:02:45.094 20:11:57 -- spdk/autotest.sh@118 -- # sync 00:02:45.094 20:11:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:45.094 20:11:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:45.094 20:11:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.364 20:12:02 -- spdk/autotest.sh@124 -- # uname -s 00:02:50.364 20:12:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:50.364 20:12:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.364 20:12:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:50.364 20:12:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:50.364 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:02:50.364 ************************************ 00:02:50.364 START TEST setup.sh 00:02:50.364 ************************************ 00:02:50.364 20:12:02 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.364 * Looking for test storage... 
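The trace above is the partition-table check autotest uses to decide whether /dev/nvme0n1 is free for testing: spdk-gpt.py reports "No valid GPT data, bailing", 'blkid -s PTTYPE -o value' returns nothing, the block_in_use check returns 1, and autotest then zeroes the first MiB of the device with dd before the setup tests run. A minimal standalone sketch of that idea, assuming a bash shell with blkid and dd available (an illustration only, not SPDK's actual scripts/common.sh):

    dev=/dev/nvme0n1                              # example device, as in the trace
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        # No partition table reported, so the disk is treated as not in use and
        # its first 1 MiB is cleared, mirroring the dd call in the trace above.
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    fi

As in the CI run, this has to execute with root privileges and is only safe on a disposable test device.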
00:02:50.364 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:50.364 20:12:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:50.364 20:12:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:50.364 20:12:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:50.364 20:12:02 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:50.364 20:12:02 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:50.364 20:12:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.364 ************************************ 00:02:50.364 START TEST acl 00:02:50.364 ************************************ 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:50.364 * Looking for test storage... 00:02:50.364 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:50.364 20:12:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.364 20:12:02 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:50.364 20:12:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:50.364 20:12:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:50.364 20:12:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:50.364 20:12:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.364 20:12:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:50.364 20:12:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.364 20:12:02 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.649 20:12:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:53.649 20:12:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:53.649 20:12:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.649 20:12:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:53.649 20:12:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.649 20:12:06 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:56.181 Hugepages 00:02:56.181 node hugesize free / total 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 
20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 00:02:56.181 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@22 -- 
# drivers["$dev"]=nvme 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:56.181 20:12:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:56.181 20:12:09 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.181 20:12:09 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.181 20:12:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:56.181 ************************************ 00:02:56.181 START TEST denied 00:02:56.181 ************************************ 00:02:56.181 20:12:09 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:56.181 20:12:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0' 00:02:56.181 20:12:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:56.181 20:12:09 
setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:02:56.181 20:12:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.181 20:12:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:59.467 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.467 20:12:12 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.645 00:03:03.645 real 0m7.053s 00:03:03.645 user 0m2.212s 00:03:03.645 sys 0m4.114s 00:03:03.645 20:12:16 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:03.645 20:12:16 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:03.645 ************************************ 00:03:03.645 END TEST denied 00:03:03.645 ************************************ 00:03:03.645 20:12:16 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:03.645 20:12:16 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:03.645 20:12:16 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:03.645 20:12:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:03.645 ************************************ 00:03:03.645 START TEST allowed 00:03:03.645 ************************************ 00:03:03.645 20:12:16 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:03.645 20:12:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:03:03.645 20:12:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:03.645 20:12:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.645 20:12:16 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:03.645 20:12:16 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:03:08.918 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.919 20:12:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:08.919 20:12:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:08.919 20:12:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:08.919 20:12:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.919 20:12:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.455 00:03:11.455 real 0m7.860s 00:03:11.455 user 0m2.193s 00:03:11.455 sys 0m4.173s 00:03:11.455 20:12:24 setup.sh.acl.allowed -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:03:11.455 20:12:24 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:11.455 ************************************ 00:03:11.455 END TEST allowed 00:03:11.455 ************************************ 00:03:11.455 00:03:11.455 real 0m21.335s 00:03:11.455 user 0m6.847s 00:03:11.455 sys 0m12.454s 00:03:11.455 20:12:24 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:11.455 20:12:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.455 ************************************ 00:03:11.455 END TEST acl 00:03:11.455 ************************************ 00:03:11.455 20:12:24 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.455 20:12:24 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:11.455 20:12:24 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:11.455 20:12:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.455 ************************************ 00:03:11.455 START TEST hugepages 00:03:11.455 ************************************ 00:03:11.455 20:12:24 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.455 * Looking for test storage... 00:03:11.455 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 166372836 kB' 'MemAvailable: 169635988 kB' 'Buffers: 4824 kB' 'Cached: 16657776 kB' 'SwapCached: 0 kB' 'Active: 13765444 kB' 'Inactive: 3533972 kB' 'Active(anon): 13039188 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640200 kB' 
'Mapped: 208628 kB' 'Shmem: 12402372 kB' 'KReclaimable: 294280 kB' 'Slab: 929204 kB' 'SReclaimable: 294280 kB' 'SUnreclaim: 634924 kB' 'KernelStack: 20672 kB' 'PageTables: 9392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982040 kB' 'Committed_AS: 14490744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318340 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.455 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 
20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 
-- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.456 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
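The long read loop traced above is get_meminfo from setup/common.sh scanning /proc/meminfo with IFS=': ' until it reaches the Hugepagesize field; the matching 'echo 2048' and 'return 0' appear just below, and hugepages.sh records the result as default_hugepages=2048. A minimal sketch of that lookup pattern in isolation (an illustration only; the real helper also handles the per-node meminfo files checked near the start of the trace):

    get_meminfo_field() {
        # Print the numeric value of one /proc/meminfo field, e.g. Hugepagesize (kB).
        local field=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$field" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # Example: get_meminfo_field Hugepagesize  ->  2048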
00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:11.457 20:12:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:11.457 20:12:24 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:11.457 20:12:24 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:11.457 20:12:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.457 ************************************ 00:03:11.457 START TEST default_setup 00:03:11.457 ************************************ 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.457 20:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:14.747 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.747 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.748 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:16.132 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
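At this point the trace shows clear_hp zeroing nr_hugepages for every hugepage size on both NUMA nodes and exporting CLEAR_HUGE=yes, after which default_setup requests 2097152 kB (1024 pages of 2048 kB) on node 0 and re-runs scripts/setup.sh, which rebinds the ioatdma and NVMe devices to vfio-pci. A hedged sketch of the per-node clear/allocate step, assuming the standard sysfs layout shown in the trace (not the SPDK script itself):

```bash
#!/usr/bin/env bash
# Sketch of the per-node hugepage handling seen in the trace: zero out every
# hugepage size on every NUMA node, then request 1024 x 2048 kB pages on
# node 0. Requires root; paths follow the sysfs layout from the log.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"        # clear_hp: echo 0 per size, per node
    done
done
export CLEAR_HUGE=yes

# default_setup: 2097152 kB requested / 2048 kB page size = 1024 pages on node 0
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
```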
00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168491492 kB' 'MemAvailable: 171754420 kB' 'Buffers: 4824 kB' 'Cached: 16657888 kB' 'SwapCached: 0 kB' 'Active: 13783632 kB' 'Inactive: 3533972 kB' 'Active(anon): 13057376 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658232 kB' 'Mapped: 208960 kB' 'Shmem: 12402484 kB' 'KReclaimable: 293832 kB' 'Slab: 927468 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633636 kB' 'KernelStack: 20864 kB' 'PageTables: 9580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14508956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318532 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
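The snapshot printed above confirms the allocation took effect: HugePages_Total and HugePages_Free are both 1024 and Hugepagesize is 2048 kB, so the reported Hugetlb figure follows directly (1024 × 2048 kB = 2097152 kB). A quick arithmetic check against /proc/meminfo, using only standard field names:

```bash
# 1024 pages x 2048 kB/page = 2097152 kB, matching the Hugetlb line above
awk '/^HugePages_Total|^Hugepagesize/ {v[$1]=$2}
     END {print v["HugePages_Total:"] * v["Hugepagesize:"], "kB"}' /proc/meminfo
```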
00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.132 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.133 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168490228 kB' 'MemAvailable: 171753156 kB' 'Buffers: 4824 kB' 'Cached: 16657904 kB' 'SwapCached: 0 kB' 'Active: 13783496 kB' 'Inactive: 3533972 kB' 'Active(anon): 13057240 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658112 kB' 'Mapped: 208892 kB' 'Shmem: 12402500 kB' 'KReclaimable: 293832 kB' 'Slab: 927472 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633640 kB' 'KernelStack: 20800 kB' 'PageTables: 9476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14510744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318516 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.134 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.135 20:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.135 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168487856 kB' 'MemAvailable: 171750784 kB' 'Buffers: 4824 kB' 'Cached: 16657924 kB' 'SwapCached: 0 kB' 'Active: 13783904 kB' 'Inactive: 3533972 kB' 'Active(anon): 13057648 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658528 kB' 'Mapped: 208896 kB' 'Shmem: 12402520 kB' 'KReclaimable: 293832 kB' 'Slab: 927472 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633640 kB' 'KernelStack: 20928 kB' 'PageTables: 9760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14510984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318580 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.136 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.137 nr_hugepages=1024 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.137 resv_hugepages=0 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.137 surplus_hugepages=0 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.137 anon_hugepages=0 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.137 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:16.138 20:12:29 
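The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo with IFS=': ' read -r var val _, continuing past every non-matching key until it reaches HugePages_Rsvd (0, hence resv=0), and the call starting here repeats the same scan for HugePages_Total. A minimal sketch of that lookup pattern follows, covering only the global /proc/meminfo case (the per-node variant that appears later in the trace is left out); the function name is illustrative, not the SPDK helper itself.

#!/usr/bin/env bash
# Sketch of the lookup pattern visible in the trace: walk a meminfo file with
# IFS=': ', skip non-matching keys, and print the value of the requested one.
# get_meminfo_sketch is an illustrative name, not the real setup/common.sh helper.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        fi
    done </proc/meminfo
    echo 0                     # fall back to 0 if the key is never seen (illustrative default)
}

get_meminfo_sketch HugePages_Total
get_meminfo_sketch HugePages_Rsvd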
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168487512 kB' 'MemAvailable: 171750440 kB' 'Buffers: 4824 kB' 'Cached: 16657944 kB' 'SwapCached: 0 kB' 'Active: 13784088 kB' 'Inactive: 3533972 kB' 'Active(anon): 13057832 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658744 kB' 'Mapped: 208896 kB' 'Shmem: 12402540 kB' 'KReclaimable: 293832 kB' 'Slab: 927472 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633640 kB' 'KernelStack: 20752 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14511008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318468 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 
20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.138 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.139 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
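At this point the check (( 1024 == nr_hugepages + surp + resv )) has passed against the global counters, and get_nodes has enumerated /sys/devices/system/node/node* to find two NUMA nodes (1024 pages on node0, 0 on node1); the call being set up here re-reads the counters from node0's own meminfo file. A sketch of the same per-node inventory using the standard sysfs hugepage counters; the 2048 kB page-size path matches the Hugepagesize reported above, but the variable names are illustrative and this is not the script's own code.

#!/usr/bin/env bash
# Sketch: enumerate NUMA nodes the way the trace does (glob under
# /sys/devices/system/node) and read each node's 2 MiB hugepage counter.
shopt -s nullglob
declare -A nodes_sketch
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}                                   # ".../node0" -> "0"
    counter=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
    [[ -r $counter ]] && nodes_sketch[$node]=$(<"$counter")
done
for node in "${!nodes_sketch[@]}"; do
    echo "node${node}=${nodes_sketch[$node]}"                  # e.g. node0=1024, node1=0
    (( total += ${nodes_sketch[$node]} ))
done
echo "total=${total}"   # should match HugePages_Total in /proc/meminfo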
00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 85318736 kB' 'MemUsed: 12296892 kB' 'SwapCached: 0 kB' 'Active: 8359260 kB' 'Inactive: 266804 kB' 'Active(anon): 7924660 kB' 'Inactive(anon): 0 kB' 'Active(file): 434600 kB' 'Inactive(file): 266804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8205436 kB' 'Mapped: 101388 kB' 'AnonPages: 423836 kB' 'Shmem: 7504032 kB' 'KernelStack: 12472 kB' 'PageTables: 5460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134748 kB' 'Slab: 433700 kB' 'SReclaimable: 134748 kB' 'SUnreclaim: 298952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 
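The node-scoped meminfo just dumped differs from /proc/meminfo only in that every line carries a "Node 0" prefix, which the mem=("${mem[@]#Node +([0-9]) }") step strips before the usual IFS=': ' split; node0's HugePages_Total and HugePages_Free of 1024 are what later produce "node0=1024 expecting 1024". A hedged sketch of that prefix handling; node0 and the function name are illustrative, not taken from the script.

#!/usr/bin/env bash
# Sketch: read a node-scoped meminfo file and strip the "Node <N> " prefix
# before the IFS=': ' split, mirroring the mapfile + extglob step traced above.
shopt -s extglob
node_meminfo_sketch() {
    local get=$1 mem_f=/sys/devices/system/node/node0/meminfo   # node0 hard-coded for illustration
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Total: 1024" -> "HugePages_Total: 1024"
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

node_meminfo_sketch HugePages_Surp    # -> 0 on the node traced above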
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.140 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.141 node0=1024 expecting 1024 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.141 00:03:16.141 real 0m4.735s 00:03:16.141 user 0m1.329s 00:03:16.141 sys 0m2.044s 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:16.141 20:12:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:16.141 ************************************ 00:03:16.141 END TEST default_setup 00:03:16.141 ************************************ 00:03:16.141 20:12:29 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:16.141 20:12:29 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:16.141 20:12:29 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:16.141 20:12:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.400 ************************************ 00:03:16.400 START TEST 
per_node_1G_alloc 00:03:16.400 ************************************ 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.400 20:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:18.942 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:18.942 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.942 
0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.942 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168483192 kB' 'MemAvailable: 171746120 kB' 'Buffers: 4824 kB' 'Cached: 16658028 kB' 'SwapCached: 0 kB' 'Active: 13782784 kB' 'Inactive: 3533972 kB' 'Active(anon): 13056528 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656748 kB' 'Mapped: 208884 kB' 'Shmem: 12402624 kB' 'KReclaimable: 293832 kB' 'Slab: 927372 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633540 kB' 'KernelStack: 20880 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14512092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318756 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 
20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168483540 kB' 'MemAvailable: 171746468 kB' 'Buffers: 4824 kB' 'Cached: 16658032 kB' 'SwapCached: 0 kB' 'Active: 13783460 kB' 'Inactive: 3533972 kB' 'Active(anon): 13057204 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657652 kB' 'Mapped: 208876 kB' 'Shmem: 12402628 kB' 'KReclaimable: 293832 kB' 'Slab: 927356 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633524 kB' 'KernelStack: 21088 kB' 'PageTables: 10080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14511356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318756 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 
20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 
20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168484936 kB' 'MemAvailable: 171747864 kB' 'Buffers: 4824 kB' 'Cached: 16658052 kB' 'SwapCached: 0 kB' 'Active: 13782300 kB' 'Inactive: 3533972 kB' 'Active(anon): 13056044 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656608 kB' 'Mapped: 208796 kB' 'Shmem: 12402648 kB' 'KReclaimable: 293832 kB' 'Slab: 927484 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633652 kB' 'KernelStack: 20816 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14511380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318548 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 20:12:31 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue
00:03:18.946-00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue repeated for every remaining /proc/meminfo field from Inactive(anon) through HugePages_Free; none match HugePages_Rsvd 00:03:18.948
20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.948 nr_hugepages=1024 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.948 resv_hugepages=0 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.948 surplus_hugepages=0 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.948 anon_hugepages=0 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.948 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168483384 kB' 'MemAvailable: 171746312 kB' 'Buffers: 4824 kB' 'Cached: 16658072 kB' 'SwapCached: 0 kB' 'Active: 13782672 kB' 'Inactive: 3533972 kB' 'Active(anon): 13056416 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656924 kB' 'Mapped: 208796 kB' 'Shmem: 12402668 kB' 'KReclaimable: 293832 kB' 'Slab: 927484 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633652 kB' 'KernelStack: 20752 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14509916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318532 
kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB'
00:03:18.948-00:03:18.949 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue repeated for every /proc/meminfo field from MemTotal through Unaccepted; none match
00:03:18.949 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.949 20:12:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:18.949 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.950 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 86360772 kB' 'MemUsed: 11254856 kB' 'SwapCached: 0 kB' 'Active: 8359332 kB' 'Inactive: 266804 kB' 'Active(anon): 7924732 kB' 'Inactive(anon): 0 kB' 'Active(file): 434600 kB' 'Inactive(file): 266804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8205520 kB' 'Mapped: 101340 kB' 'AnonPages: 423724 kB' 'Shmem: 7504116 kB' 'KernelStack: 12824 kB' 'PageTables: 6200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134748 kB' 'Slab: 433732 kB' 'SReclaimable: 134748 kB' 'SUnreclaim: 298984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:18.950-00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue repeated for the node0 meminfo fields from MemTotal through FilePmdMapped; none match
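The lookups traced in this stage all follow the pattern visible above: setup/common.sh's get_meminfo picks either /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strips the "Node <N> " prefix, then reads the file field by field until the requested key matches and echoes its value; setup/hugepages.sh then folds the per-node figures into its totals. Below is a minimal bash sketch of that logic under those assumptions, not the verbatim SPDK helpers: the function name meminfo_value, the loop variables, and the node[0-9]* glob are illustrative stand-ins for the script's actual get_meminfo/get_nodes code.

#!/usr/bin/env bash
# Sketch only (illustrative, not the actual setup/common.sh): return one field
# from /proc/meminfo or from a per-NUMA-node meminfo file.
meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local line var val rest
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}            # per-node files prefix every line with "Node <N> "
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then         # e.g. HugePages_Surp -> print its value
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1                                  # field not present
}

# The accounting this stage performs: global HugePages_Total (1024 in the log above)
# is checked against nr_hugepages + surplus + reserved, and the pages are expected to
# be spread across the NUMA nodes (512 + 512 in this run).
echo "global: Total=$(meminfo_value HugePages_Total) Rsvd=$(meminfo_value HugePages_Rsvd)"
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    echo "node$n: Total=$(meminfo_value HugePages_Total "$n") Surp=$(meminfo_value HugePages_Surp "$n")"
done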
00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765548 kB' 'MemFree: 82121388 
kB' 'MemUsed: 11644160 kB' 'SwapCached: 0 kB' 'Active: 5423808 kB' 'Inactive: 3267168 kB' 'Active(anon): 5132152 kB' 'Inactive(anon): 0 kB' 'Active(file): 291656 kB' 'Inactive(file): 3267168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8457420 kB' 'Mapped: 107456 kB' 'AnonPages: 233700 kB' 'Shmem: 4898596 kB' 'KernelStack: 8232 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159084 kB' 'Slab: 493752 kB' 'SReclaimable: 159084 kB' 'SUnreclaim: 334668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.300 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:19.301 node0=512 expecting 512 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:19.301 node1=512 expecting 512 00:03:19.301 20:12:31
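
The per-node HugePages_Surp reads traced above both funnel through the get_meminfo helper in setup/common.sh: it picks /proc/meminfo or, when a node is given, /sys/devices/system/node/node<N>/meminfo, strips the "Node <N> " prefix, and prints the value of the requested key. A minimal self-contained sketch of that idea (hypothetical helper name and simplified parsing, not the exact test code):

  #!/usr/bin/env bash
  # Sketch: look up one key in /proc/meminfo or a per-node meminfo file.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      # Per-node counters live in sysfs and carry a "Node <N> " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node "$node" }            # drop the per-node prefix, if present
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }
  get_meminfo_sketch HugePages_Surp 1   # would print 0 for the node 1 dump above

The real helper reads the whole file into an array with mapfile and strips the prefix in one expansion, but the lookup behaviour is the same: the echoed value (0 here) is what hugepages.sh adds into nodes_test[node].
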
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:19.301 00:03:19.301 real 0m2.834s 00:03:19.301 user 0m1.056s 00:03:19.301 sys 0m1.792s 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:19.301 20:12:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:19.301 ************************************ 00:03:19.301 END TEST per_node_1G_alloc 00:03:19.301 ************************************ 00:03:19.301 20:12:31 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:19.301 20:12:31 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:19.301 20:12:31 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:19.301 20:12:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.301 ************************************ 00:03:19.301 START TEST even_2G_alloc 00:03:19.301 ************************************ 00:03:19.301 20:12:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:19.301 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:19.301 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.301 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:19.301 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.301 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.301 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:19.301 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.302 
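
The even_2G_alloc prologue above sizes the request the same way the per-node test did: a request of 2097152 kB at the 2048 kB hugepage size reported in the meminfo dumps works out to nr_hugepages=1024, which get_test_nr_hugepages_per_node then spreads across the two NUMA nodes as 512 + 512. A compact sketch of that arithmetic (hypothetical function name, 2048 kB page size assumed from the dumps, not the exact hugepages.sh code):

  #!/usr/bin/env bash
  # Sketch: turn a size in kB into 2 MiB hugepages and split them evenly per node.
  split_hugepages_sketch() {
      local size_kb=$1 no_nodes=$2
      local hugepagesize_kb=2048                        # assumed, matches 'Hugepagesize: 2048 kB'
      local nr_hugepages=$(( size_kb / hugepagesize_kb ))
      local node
      for (( node = 0; node < no_nodes; node++ )); do
          echo "node${node}=$(( nr_hugepages / no_nodes ))"
      done
  }
  split_hugepages_sketch 2097152 2   # -> node0=512, node1=512

This is the split that the 'node0=512 expecting 512' and 'node1=512 expecting 512' lines above assert, and it is what the NRHUGE=1024 / HUGE_EVEN_ALLOC=yes settings below hand to scripts/setup.sh.
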
20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.302 20:12:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:22.596 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.596 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.596 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.596 20:12:35 
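
The verify_nr_hugepages pass that starts above first decides whether anonymous huge pages can exist at all: it reads /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never") and only queries AnonHugePages from meminfo when the active policy is not [never]. A rough equivalent of that check (hypothetical function name, simplified; the real check lives in setup/hugepages.sh):

  #!/usr/bin/env bash
  # Sketch: report anonymous hugepage usage only when THP is not disabled.
  anon_hugepages_kb() {
      local thp=/sys/kernel/mm/transparent_hugepage/enabled
      # The file looks like "always [madvise] never"; the bracketed word is the active policy.
      if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
          awk '/^AnonHugePages:/ {print $2; exit}' /proc/meminfo
      else
          echo 0
      fi
  }
  anon_hugepages_kb   # 0 kB in the dump that follows

In this run the value comes back as 0, so the verification below only has to account for the 1024 preallocated pages (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Surp: 0).
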
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168508216 kB' 'MemAvailable: 171771144 kB' 'Buffers: 4824 kB' 'Cached: 16658188 kB' 'SwapCached: 0 kB' 'Active: 13781156 kB' 'Inactive: 3533972 kB' 'Active(anon): 13054900 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654876 kB' 'Mapped: 208000 kB' 'Shmem: 12402784 kB' 'KReclaimable: 293832 kB' 'Slab: 927500 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633668 kB' 'KernelStack: 20672 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14497920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318372 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.596 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.596 20:12:35 
setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168508452 kB' 'MemAvailable: 171771380 kB' 'Buffers: 4824 kB' 'Cached: 16658192 kB' 'SwapCached: 0 kB' 'Active: 13780192 kB' 'Inactive: 3533972 kB' 'Active(anon): 13053936 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654396 kB' 'Mapped: 207912 kB' 'Shmem: 12402788 kB' 'KReclaimable: 293832 kB' 'Slab: 927472 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633640 kB' 'KernelStack: 20656 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14497936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318356 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.598 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.598 20:12:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... the HugePages_Surp key scan continues over the remaining /proc/meminfo fields -- Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd -- each failing the HugePages_Surp comparison and hitting continue, until HugePages_Surp itself matches ...]
00:03:22.599 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.599 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
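The records above are the tail of one get_meminfo call: common.sh reads /proc/meminfo (or a node's meminfo file when a node index is passed), strips any leading "Node <n>" prefix, and walks the "key: value" pairs until the requested key -- here HugePages_Surp -- matches, then echoes its value (0). A minimal standalone sketch of that lookup pattern follows; the function name meminfo_lookup and the exact option handling are illustrative, not the suite's common.sh code.

# Sketch only: single-key lookup in the style of the get_meminfo trace above.
# meminfo_lookup is an illustrative name, not the suite's helper.
shopt -s extglob

meminfo_lookup() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # per-node lines carry a "Node <n> " prefix
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                    # e.g. 0 for HugePages_Surp
            return 0
        fi
    done < "$mem_f"
    return 1                               # key not found
}

meminfo_lookup HugePages_Surp              # system-wide surplus huge pages
meminfo_lookup HugePages_Surp 0            # surplus huge pages on NUMA node 0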
setup/hugepages.sh@99 -- # surp=0 00:03:22.599 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.599 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.599 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.599 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168507696 kB' 'MemAvailable: 171770624 kB' 'Buffers: 4824 kB' 'Cached: 16658212 kB' 'SwapCached: 0 kB' 'Active: 13780208 kB' 'Inactive: 3533972 kB' 'Active(anon): 13053952 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654396 kB' 'Mapped: 207912 kB' 'Shmem: 12402808 kB' 'KReclaimable: 293832 kB' 'Slab: 927472 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633640 kB' 'KernelStack: 20656 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14497960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318372 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:22.600 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... the HugePages_Rsvd key scan continues over Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages and FilePmdMapped -- each failing the HugePages_Rsvd comparison and hitting continue ...]
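Just below, the HugePages_Rsvd lookup returns 0 as well, and hugepages.sh then cross-checks the request against the kernel counters: the 1024 pages asked for must equal nr_hugepages plus the surplus and reserved counts, and HugePages_Total itself is fetched and compared. A hedged sketch of that kind of accounting check, reusing the illustrative meminfo_lookup above (function and variable names here are mine, not hugepages.sh's):

# Sketch only: accounting check in the spirit of the hugepages.sh asserts.
# Relies on the illustrative meminfo_lookup sketched earlier.
verify_hugepage_accounting() {
    local want=$1                          # requested pages, e.g. 1024 x 2048 kB
    local total free surp resv
    total=$(meminfo_lookup HugePages_Total)
    free=$(meminfo_lookup HugePages_Free)
    surp=$(meminfo_lookup HugePages_Surp)
    resv=$(meminfo_lookup HugePages_Rsvd)
    # All requested pages must be in the pool, with nothing surplus or
    # reserved outstanding on an otherwise idle system.
    (( total == want ))          || { echo "total=$total want=$want" >&2; return 1; }
    (( surp == 0 && resv == 0 )) || { echo "surp=$surp resv=$resv" >&2; return 1; }
    echo "ok: $free of $total huge pages free"
}

verify_hugepage_accounting 1024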
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.601 nr_hugepages=1024 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.601 resv_hugepages=0 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.601 surplus_hugepages=0 00:03:22.601 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.601 anon_hugepages=0 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.602 
20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168507696 kB' 'MemAvailable: 171770624 kB' 'Buffers: 4824 kB' 'Cached: 16658252 kB' 'SwapCached: 0 kB' 'Active: 13779896 kB' 'Inactive: 3533972 kB' 'Active(anon): 13053640 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 653996 kB' 'Mapped: 207912 kB' 'Shmem: 12402848 kB' 'KReclaimable: 293832 kB' 'Slab: 927472 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633640 kB' 'KernelStack: 20640 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14497980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318372 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.602 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.602 20:12:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... the HugePages_Total key scan continues over Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and CmaTotal -- each failing the HugePages_Total comparison and hitting continue ...]
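With the system-wide HugePages_Total about to come back as 1024, the trace that follows enumerates the NUMA nodes (get_nodes finds two, each expected to carry 512 of the 2 MiB pages) and repeats the lookup against /sys/devices/system/node/node0/meminfo. A rough sketch of that per-node walk, again using the illustrative meminfo_lookup and not the suite's get_nodes/hugepages.sh code:

# Sketch only: check that the huge pages are split evenly across NUMA nodes.
# Uses the illustrative meminfo_lookup from the earlier sketch.
check_even_split() {
    local want_per_node=$1                 # e.g. 512 on this two-node box
    local node idx total surp
    for node in /sys/devices/system/node/node[0-9]*; do
        idx=${node##*node}
        total=$(meminfo_lookup HugePages_Total "$idx")
        surp=$(meminfo_lookup HugePages_Surp "$idx")
        echo "node$idx: HugePages_Total=$total HugePages_Surp=$surp"
        (( total == want_per_node && surp == 0 )) || return 1
    done
}

check_even_split 512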
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.603 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 86394416 kB' 'MemUsed: 11221212 kB' 'SwapCached: 0 kB' 'Active: 8356692 kB' 'Inactive: 266804 kB' 'Active(anon): 7922092 kB' 'Inactive(anon): 0 kB' 'Active(file): 434600 kB' 'Inactive(file): 266804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8205520 kB' 'Mapped: 100444 kB' 'AnonPages: 421072 kB' 'Shmem: 7504116 kB' 'KernelStack: 12472 kB' 'PageTables: 5436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134748 kB' 'Slab: 433496 kB' 'SReclaimable: 134748 kB' 'SUnreclaim: 298748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.604 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765548 kB' 'MemFree: 82112780 kB' 'MemUsed: 11652768 kB' 'SwapCached: 0 kB' 'Active: 5423544 kB' 'Inactive: 3267168 kB' 'Active(anon): 5131888 kB' 'Inactive(anon): 0 kB' 'Active(file): 291656 kB' 'Inactive(file): 3267168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8457580 kB' 'Mapped: 107468 kB' 'AnonPages: 233220 kB' 'Shmem: 4898756 kB' 'KernelStack: 8168 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159084 kB' 'Slab: 493976 kB' 'SReclaimable: 159084 kB' 'SUnreclaim: 334892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.605 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 
20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.606 node0=512 expecting 512 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:22.606 node1=512 expecting 512 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:22.606 00:03:22.606 real 0m3.166s 00:03:22.606 user 0m1.234s 00:03:22.606 sys 0m1.970s 00:03:22.606 20:12:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:22.606 20:12:35 
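The loop traced above is setup/common.sh's get_meminfo helper pulling HugePages_Surp out of each node's meminfo: it mapfile-reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo for a per-node query), strips the leading "Node N " prefix, and walks the key/value pairs with IFS=': ' until the requested field matches. The following is a minimal standalone sketch of that pattern; it is a simplified illustration (get_meminfo_sketch is a made-up name), not the repository's exact implementation.

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern below

# Sketch of the meminfo lookup pattern traced above.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node queries read the node's own meminfo, whose lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix if present
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example: the node-1 query from the trace; output depends on the machine it runs on.
get_meminfo_sketch HugePages_Surp 1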
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.606 ************************************ 00:03:22.606 END TEST even_2G_alloc 00:03:22.606 ************************************ 00:03:22.606 20:12:35 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:22.606 20:12:35 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:22.606 20:12:35 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.606 20:12:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.606 ************************************ 00:03:22.606 START TEST odd_alloc 00:03:22.606 ************************************ 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:22.606 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.607 20:12:35 setup.sh.hugepages.odd_alloc -- 
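The odd_alloc sizing just traced requests 2098176 kB (HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes), which the script turns into nr_hugepages=1025 and spreads across the two NUMA nodes as node0=513 / node1=512 via get_test_nr_hugepages_per_node. Below is a small sketch of that uneven split, assuming a plain base-plus-remainder distribution rather than the script's exact control flow (split_hugepages is an illustrative name, not the repo's helper).

#!/usr/bin/env bash
# Sketch of spreading an odd hugepage count across NUMA nodes, reproducing the
# 1025 -> node0=513 / node1=512 outcome from the odd_alloc trace above.
# Illustrative only; hugepages.sh's real control flow differs in detail.
split_hugepages() {
    local total=$1 nodes=$2
    local base=$(( total / nodes ))
    local rem=$(( total % nodes ))
    local node
    for (( node = 0; node < nodes; node++ )); do
        # Lower-numbered nodes absorb the leftover pages one at a time.
        if (( node < rem )); then
            echo "node${node}=$(( base + 1 ))"
        else
            echo "node${node}=${base}"
        fi
    done
}

# The trace sets nr_hugepages=1025 for this test and splits it over 2 nodes:
split_hugepages 1025 2
# node0=513
# node1=512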
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:25.145 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.145 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.145 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168490604 kB' 'MemAvailable: 171753532 kB' 'Buffers: 4824 kB' 'Cached: 16658348 kB' 'SwapCached: 0 kB' 'Active: 13781148 
kB' 'Inactive: 3533972 kB' 'Active(anon): 13054892 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655308 kB' 'Mapped: 207984 kB' 'Shmem: 12402944 kB' 'KReclaimable: 293832 kB' 'Slab: 927820 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633988 kB' 'KernelStack: 20704 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 14498568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318372 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.145 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 
-- # local mem_f mem 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168491368 kB' 'MemAvailable: 171754296 kB' 'Buffers: 4824 kB' 'Cached: 16658352 kB' 'SwapCached: 0 kB' 'Active: 13780664 kB' 'Inactive: 3533972 kB' 'Active(anon): 13054408 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654836 kB' 'Mapped: 207936 kB' 'Shmem: 12402948 kB' 'KReclaimable: 293832 kB' 'Slab: 927820 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633988 kB' 'KernelStack: 20656 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 14498584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318356 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.146 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
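The trace above and below is the setup/common.sh get_meminfo helper scanning a single /proc/meminfo snapshot field by field until it reaches the requested key (here HugePages_Surp). Below is a minimal standalone sketch of that loop, reconstructed only from the commands visible in this trace; the real function body in setup/common.sh may differ in detail, and the script shebang and comments are added here for readability:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup the xtrace in this log is stepping through.
    shopt -s extglob                       # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1                       # field to look up, e.g. HugePages_Surp
        local node=${2:-}                  # optional NUMA node; empty -> system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, the per-node counters are read instead,
        # e.g. /sys/devices/system/node/node0/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp             # would print 0 for the snapshot dumped above
    get_meminfo HugePages_Surp 0           # node-0 counter, as in the last lookup of this block

Each "continue" entry in the trace is this loop skipping a non-matching field; the "echo 0" / "return 0" pair marks the hit for the requested key.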
00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.147 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 
20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168491116 kB' 'MemAvailable: 171754044 kB' 'Buffers: 4824 kB' 'Cached: 16658368 kB' 'SwapCached: 0 kB' 'Active: 13780680 kB' 'Inactive: 3533972 kB' 'Active(anon): 13054424 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654836 kB' 'Mapped: 207936 kB' 'Shmem: 12402964 kB' 'KReclaimable: 293832 kB' 'Slab: 927820 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633988 kB' 'KernelStack: 20656 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 14498604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
318356 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.148 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.149 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 
20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:25.150 nr_hugepages=1025 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.150 resv_hugepages=0 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.150 surplus_hugepages=0 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.150 anon_hugepages=0 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168491116 kB' 'MemAvailable: 171754044 kB' 'Buffers: 4824 kB' 'Cached: 16658368 kB' 'SwapCached: 0 kB' 'Active: 13780680 kB' 'Inactive: 3533972 kB' 'Active(anon): 13054424 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654836 kB' 'Mapped: 207936 kB' 'Shmem: 12402964 kB' 'KReclaimable: 293832 kB' 'Slab: 927820 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633988 kB' 'KernelStack: 20656 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 14498624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318372 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.150 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.151 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.413 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 86399736 kB' 'MemUsed: 11215892 kB' 'SwapCached: 0 kB' 'Active: 8356288 kB' 'Inactive: 266804 kB' 'Active(anon): 7921688 kB' 'Inactive(anon): 0 kB' 'Active(file): 434600 kB' 'Inactive(file): 266804 
kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8205524 kB' 'Mapped: 100468 kB' 'AnonPages: 420688 kB' 'Shmem: 7504120 kB' 'KernelStack: 12456 kB' 'PageTables: 5380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134748 kB' 'Slab: 433836 kB' 'SReclaimable: 134748 kB' 'SUnreclaim: 299088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.414 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 
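A few records back (setup/hugepages.sh@27-33) get_nodes enumerated the NUMA nodes with the extglob pattern /sys/devices/system/node/node+([0-9]) and recorded per-node counts of 512 and 513. A small sketch of that enumeration under the same pattern; where the counts come from is stubbed here with the per-node 2 MiB sysfs counter, which is only a stand-in for whatever the traced script actually assigns:

  #!/usr/bin/env bash
  shopt -s extglob                       # the +([0-9]) glob in the trace needs extglob
  declare -A nodes_sys=()
  no_nodes=0
  for node in /sys/devices/system/node/node+([0-9]); do
      [[ -e $node ]] || continue         # skip the literal pattern when nothing matches
      # Stand-in count source; the traced run ended up with 512 (node0) and 513 (node1).
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      (( ++no_nodes ))
  done
  echo "no_nodes=$no_nodes counts=${nodes_sys[*]}"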
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765548 kB' 'MemFree: 82091492 kB' 'MemUsed: 11674056 kB' 'SwapCached: 0 kB' 'Active: 5424400 kB' 'Inactive: 3267168 kB' 'Active(anon): 5132744 kB' 'Inactive(anon): 0 kB' 'Active(file): 291656 kB' 'Inactive(file): 3267168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8457728 kB' 'Mapped: 107468 kB' 'AnonPages: 234016 kB' 'Shmem: 4898904 kB' 'KernelStack: 8184 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159084 kB' 'Slab: 493984 kB' 'SReclaimable: 159084 kB' 'SUnreclaim: 334900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
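The two get_meminfo HugePages_Surp calls just traced (node 0, then node 1) switch the input from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo and strip the leading 'Node N ' prefix before the same field scan runs. A hedged sketch of that source selection and prefix strip, following the common.sh@17-29 records; the function name and layout here are illustrative, not the repository code:

  #!/usr/bin/env bash
  shopt -s extglob                                  # for the +([0-9]) prefix pattern
  # Print one key from the global meminfo, or from a node's meminfo when a node id is given.
  node_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo mem var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")              # per-node lines start with "Node N "
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  node_meminfo HugePages_Surp 1                     # the run above printed 0 for both nodes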
00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.415 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
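The records just after this point close out odd_alloc: node 1's surplus also comes back 0, the per-node expectations get the (zero) surplus and reserve added in, and the script prints 'node0=512 expecting 513' / 'node1=513 expecting 512' before the sorted comparison at hugepages.sh@130. Restating that arithmetic with this run's values hard-coded, as a sketch only:

  #!/usr/bin/env bash
  # odd_alloc bookkeeping for this run: 1025 odd pages split 512/513 over two nodes.
  nr_hugepages=1025 surp=0 resv=0
  node0=512 node1=513                               # per-node totals read back above
  (( node0 + node1 == nr_hugepages + surp + resv )) || { echo "count mismatch"; exit 1; }
  # hugepages.sh@130 compares the sorted observed split against the sorted expected split.
  observed="512 513" expected="512 513"
  [[ $observed == "$expected" ]] && echo "odd allocation verified"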
00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:25.416 node0=512 expecting 513 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.416 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:25.416 node1=513 expecting 512 00:03:25.417 20:12:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:25.417 00:03:25.417 real 0m2.915s 00:03:25.417 user 0m1.092s 00:03:25.417 sys 0m1.843s 00:03:25.417 20:12:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:25.417 20:12:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:25.417 ************************************ 00:03:25.417 END TEST odd_alloc 00:03:25.417 ************************************ 00:03:25.417 20:12:38 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:25.417 20:12:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:25.417 20:12:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:25.417 20:12:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.417 ************************************ 00:03:25.417 START TEST custom_alloc 00:03:25.417 ************************************ 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.417 20:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:28.713 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.713 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:00:04.3 (8086 2021): 
Already using the vfio-pci driver 00:03:28.713 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:28.713 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 167445088 kB' 'MemAvailable: 170708016 kB' 'Buffers: 4824 kB' 'Cached: 16658492 kB' 'SwapCached: 0 kB' 'Active: 13781252 kB' 'Inactive: 3533972 kB' 'Active(anon): 13054996 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654648 kB' 'Mapped: 208024 kB' 
'Shmem: 12403088 kB' 'KReclaimable: 293832 kB' 'Slab: 927512 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633680 kB' 'KernelStack: 20640 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 14498768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318356 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.713 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 
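The custom_alloc records above size two per-node requests (1048576 kB and 2097152 kB against the 2048 kB Hugepagesize shown in the snapshot), store them as nodes_hp[0]=512 and nodes_hp[1]=1024, and join them into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' via the local IFS=, at hugepages.sh@167; the snapshot just printed confirms HugePages_Total: 1536 after setup.sh ran. A compact sketch of that assembly, reconstructed from the trace rather than taken from the script itself:

  #!/usr/bin/env bash
  # Rebuild the HUGENODE string the custom_alloc trace arrives at.
  hugepagesize_kb=2048                              # 'Hugepagesize: 2048 kB' in the snapshot
  declare -a nodes_hp
  nodes_hp[0]=$(( 1048576 / hugepagesize_kb ))      # 1 GiB request  -> 512 pages
  nodes_hp[1]=$(( 2097152 / hugepagesize_kb ))      # 2 GiB request  -> 1024 pages
  total=0; parts=()
  for node in "${!nodes_hp[@]}"; do
      parts+=("nodes_hp[$node]=${nodes_hp[node]}")
      (( total += nodes_hp[node] ))
  done
  HUGENODE=$(IFS=,; printf '%s' "${parts[*]}")      # comma-join, matching the traced string
  echo "HUGENODE=$HUGENODE total=$total"            # nodes_hp[0]=512,nodes_hp[1]=1024 total=1536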
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 
20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.714 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
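The trace above is the whole of the get_meminfo pattern used by setup/common.sh: snapshot /proc/meminfo, walk it with IFS=': ' and read -r var val _, skip every key with "continue" until the requested one (here AnonHugePages) matches, then echo that key's value. A minimal stand-alone sketch of the same pattern, for reference only; the helper name meminfo_value and its exact shape are illustrative, not the SPDK function itself:

meminfo_value() {
    # Echo the value column of one /proc/meminfo key, e.g. "1536" for HugePages_Total.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other key, as in the xtrace above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
# e.g. meminfo_value AnonHugePages   -> 0 on this node, matching "anon=0" above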
00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.715 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 167446348 kB' 'MemAvailable: 170709276 kB' 'Buffers: 4824 kB' 'Cached: 16658496 kB' 'SwapCached: 0 kB' 'Active: 13780884 kB' 'Inactive: 3533972 kB' 'Active(anon): 13054628 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654792 kB' 'Mapped: 207948 kB' 'Shmem: 12403092 kB' 'KReclaimable: 293832 kB' 'Slab: 927500 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633668 kB' 'KernelStack: 20608 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 14498916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318292 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB'
[setup/common.sh xtrace condensed: the @31 read loop again walks every key of this snapshot, and the @32 test hits "continue" on each key that is not HugePages_Surp]
00:03:28.716 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.716 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.716 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.716 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
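With anon and surp collected, the trace goes on to read HugePages_Rsvd and HugePages_Total the same way and then runs the consistency checks visible further down, (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )). A hedged sketch of that verification step, reusing the illustrative meminfo_value helper from above; the function name verify_custom_alloc is made up, the real logic lives in the test's setup/hugepages.sh:

verify_custom_alloc() {
    local want=1536                      # pages requested by the custom_alloc test on this run
    local nr surp resv anon
    nr=$(meminfo_value HugePages_Total)
    surp=$(meminfo_value HugePages_Surp)
    resv=$(meminfo_value HugePages_Rsvd)
    anon=$(meminfo_value AnonHugePages)  # reported in kB; expected to be 0 here
    echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    (( want == nr + surp + resv )) || return 1   # same arithmetic as the @107 check below
    (( want == nr ))                             # and the @109 check
}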
00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.717 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 167446604 kB' 'MemAvailable: 170709532 kB' 'Buffers: 4824 kB' 'Cached: 16658512 kB' 'SwapCached: 0 kB' 'Active: 13780908 kB' 'Inactive: 3533972 kB' 'Active(anon): 13054652 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654792 kB' 'Mapped: 207948 kB' 'Shmem: 12403108 kB' 'KReclaimable: 293832 kB' 'Slab: 927492 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633660 kB' 'KernelStack: 20608 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 14498940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318292 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB'
[setup/common.sh xtrace condensed: the @31 read loop walks this snapshot key by key and the @32 test hits "continue" on every non-matching key from MemTotal through Unaccepted; the last few iterations of the scan follow]
00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.718 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:28.719 nr_hugepages=1536 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.719 resv_hugepages=0 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.719 surplus_hugepages=0 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.719 anon_hugepages=0 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 167445848 kB' 'MemAvailable: 170708776 kB' 'Buffers: 4824 kB' 'Cached: 16658512 kB' 'SwapCached: 0 kB' 'Active: 13780940 kB' 'Inactive: 3533972 kB' 'Active(anon): 13054684 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654824 kB' 'Mapped: 207948 kB' 'Shmem: 
12403108 kB' 'KReclaimable: 293832 kB' 'Slab: 927492 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633660 kB' 'KernelStack: 20624 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 14498964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318292 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.719 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
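Every lookup traced above follows the same pattern: snapshot /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node id is given), strip any leading "Node <id>" prefix, then walk the fields until the requested key matches and echo its value. Below is a minimal, self-contained sketch of that pattern; the function name and structure are simplified illustrations, not the actual setup/common.sh code.

#!/usr/bin/env bash
# Hedged sketch of the meminfo lookup pattern traced above; names and structure
# are simplified and are NOT the real setup/common.sh implementation.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}     # field name, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines carry a "Node <id> " prefix; strip it so both
    # file formats can be scanned the same way
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. 1536 for HugePages_Total in the run above
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # system-wide hugepage count
get_meminfo_sketch HugePages_Surp 0    # surplus pages on NUMA node 0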
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.720 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.721 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 86399884 kB' 'MemUsed: 11215744 kB' 'SwapCached: 0 kB' 'Active: 8357000 kB' 'Inactive: 266804 kB' 'Active(anon): 7922400 kB' 'Inactive(anon): 0 kB' 'Active(file): 434600 kB' 'Inactive(file): 266804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8205528 kB' 'Mapped: 100480 kB' 'AnonPages: 421452 kB' 'Shmem: 7504124 kB' 'KernelStack: 12472 kB' 'PageTables: 5476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134748 kB' 'Slab: 433532 kB' 'SReclaimable: 134748 kB' 'SUnreclaim: 298784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:28.721 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (scan: every node0 meminfo field before HugePages_Surp is skipped with "continue" until HugePages_Surp matches)
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765548 kB' 'MemFree: 81046532 kB' 'MemUsed: 12719016 kB' 'SwapCached: 0 kB' 'Active: 5424520 kB' 'Inactive: 3267168 kB' 'Active(anon): 5132864 kB' 'Inactive(anon): 0 kB' 'Active(file): 291656 kB' 'Inactive(file): 3267168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8457884 kB' 'Mapped: 107476 kB' 'AnonPages: 233992 kB' 'Shmem: 4899060 kB' 'KernelStack: 8184 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159084 kB' 'Slab: 493960 kB' 'SReclaimable: 159084 kB' 'SUnreclaim: 334876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:28.722 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (scan: every node1 meminfo field before HugePages_Surp is skipped with "continue" until HugePages_Surp matches)
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
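The two lookups above establish the per-node surplus; the per-node totals themselves (512 and 1024 in this run) can also be cross-checked straight from the sysfs hugepage counters rather than by parsing nodeN/meminfo the way the traced helper does. A small illustrative check, with the expected values hard-coded to this run's 512/1024 split:

#!/usr/bin/env bash
# Illustrative cross-check of the per-node split used by this run (512 pages on
# node0, 1024 on node1). Reads the 2048 kB hugepage counters from sysfs instead
# of parsing nodeN/meminfo; the expected values are specific to this test run.
declare -A expected=([0]=512 [1]=1024)

for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue                     # skip if the glob did not match
    node=${node_dir##*node}
    nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node${node}=${nr} expecting ${expected[$node]:-unset}"
done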
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:28.723 node0=512 expecting 512
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:28.723 node1=1024 expecting 1024
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:28.723
00:03:28.723 real 0m3.147s
00:03:28.723 user 0m1.226s
00:03:28.723 sys 0m1.949s
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:28.723 20:12:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:28.723 ************************************
00:03:28.723 END TEST custom_alloc
00:03:28.723 ************************************
00:03:28.723 20:12:41 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:28.723 20:12:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:28.723 20:12:41 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:28.723 20:12:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:28.723 ************************************
00:03:28.723 START TEST no_shrink_alloc
00:03:28.723 ************************************
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
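The nr_hugepages=1024 figure above is just the requested size divided by the default hugepage size (the request appears to be in kB, which matches the result: 2097152 / 2048 = 1024). A tiny reproduction of that arithmetic, with illustrative variable names, before the per-node distribution continues below:

#!/usr/bin/env bash
# Reproduces the size-to-page-count step at the start of no_shrink_alloc:
# 2097152 / 2048 = 1024 pages, all of which are then assigned to the single
# requested node (node 0). Variable names are illustrative.
size=2097152                                                      # requested size (kB)
hugepagesize=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
echo "nr_hugepages=$(( size / hugepagesize ))"                    # -> 1024 here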
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.723 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.724 20:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:32.013 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.014 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.014 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- 
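The get_meminfo AnonHugePages call recorded above is what produces the long field-by-field trace that follows: setup/common.sh snapshots /proc/meminfo (or a node's meminfo when a node id is passed), strips any leading 'Node <id>' prefix, then walks the fields with IFS=': ' until the requested key matches and echoes its value. A self-contained sketch of an equivalent lookup for the system-wide case, shown only as an illustration and not as the project's implementation:

  # Hypothetical equivalent of the lookup traced below: scan /proc/meminfo
  # for a single key and print its value (kB amount or page count).
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  get_meminfo_sketch AnonHugePages   # prints 0 on the system traced here

The real helper additionally caches the whole snapshot with mapfile before scanning, which is why every meminfo field shows up in the xtrace output below.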
setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168509204 kB' 'MemAvailable: 171772132 kB' 'Buffers: 4824 kB' 'Cached: 16658644 kB' 'SwapCached: 0 kB' 'Active: 13784524 kB' 'Inactive: 3533972 kB' 'Active(anon): 13058268 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658136 kB' 'Mapped: 207672 kB' 'Shmem: 12403240 kB' 'KReclaimable: 293832 kB' 'Slab: 927324 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633492 kB' 'KernelStack: 21344 kB' 'PageTables: 11700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14502412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318628 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.014 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.014 20:12:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [per-field xtrace condensed: each remaining /proc/meminfo field from Buffers through VmallocUsed is matched against AnonHugePages and skipped via continue] 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.015 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168510140 kB' 'MemAvailable: 171773068 kB' 'Buffers: 4824 kB' 'Cached: 16658648 kB' 'SwapCached: 0 kB' 'Active: 13782088 kB' 'Inactive: 3533972 kB' 'Active(anon): 13055832 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655888 kB' 'Mapped: 207968 kB' 'Shmem: 12403244 kB' 'KReclaimable: 293832 kB' 'Slab: 927444 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633612 kB' 'KernelStack: 20848 kB' 'PageTables: 9612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14499984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318436 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.016 20:12:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [per-field xtrace condensed: fields Active through Unaccepted are each matched against HugePages_Surp and skipped via continue] 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.018 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168509872 kB' 'MemAvailable: 171772800 kB' 'Buffers: 4824 kB' 'Cached: 16658664 kB' 'SwapCached: 0 kB' 'Active: 13781672 kB' 'Inactive: 3533972 kB' 'Active(anon): 13055416 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655484 kB' 'Mapped: 207964 kB' 'Shmem: 12403260 kB' 'KReclaimable: 293832 kB' 'Slab: 927444 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633612 
kB' 'KernelStack: 20656 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14500004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318452 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.019 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.019 20:12:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # [per-field xtrace condensed: fields Inactive through Bounce are each matched against HugePages_Rsvd and skipped via continue] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
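The long runs of "[[ <key> == HugePages_Rsvd ]]" followed by "continue" in this trace are bash xtrace output from the get_meminfo helper in setup/common.sh walking every /proc/meminfo field until it reaches the requested key and echoing its value. As a rough, self-contained sketch of that lookup (the name get_meminfo_value and the exact handling of per-node files are assumptions for illustration, not the project's code):

# Hypothetical helper, loosely based on the lookup this xtrace reflects.
get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node "$node" }         # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"               # first match wins, e.g. HugePages_Rsvd: 0
            return 0
        fi
    done < "$mem_f"
    echo 0                                 # fall back to 0 if the key never appears
}

get_meminfo_value HugePages_Rsvd           # whole-system lookup; prints 0 in this run
get_meminfo_value HugePages_Surp 0         # node-0 lookup via sysfs; also 0 here
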
00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.021 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.022 nr_hugepages=1024 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.022 resv_hugepages=0 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.022 surplus_hugepages=0 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.022 anon_hugepages=0 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168509972 kB' 'MemAvailable: 171772900 kB' 'Buffers: 4824 kB' 'Cached: 16658704 kB' 'SwapCached: 0 kB' 'Active: 13781332 kB' 'Inactive: 3533972 kB' 'Active(anon): 13055076 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655092 kB' 'Mapped: 207964 kB' 'Shmem: 12403300 kB' 'KReclaimable: 
293832 kB' 'Slab: 927444 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633612 kB' 'KernelStack: 20640 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14500028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318452 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.022 20:12:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.022 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.023 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
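At this point the script has already established resv=0 and nr_hugepages=1024 and is re-reading HugePages_Total before the per-node accounting that follows, which ends in "node0=1024 expecting 1024". A standalone sketch of that per-node verification (hypothetical, not the project's hugepages.sh; it assumes the usual sysfs layout under /sys/devices/system/node):

# Hypothetical check mirroring the verify step seen later in this trace:
# the global hugepage count should equal the sum reported by each NUMA node.
expected=$(cat /proc/sys/vm/nr_hugepages)                  # 1024 in this run
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    pages=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${pages}"                            # e.g. node0=1024, node1=0
    (( total += pages ))
done
if (( total == expected )); then
    echo "hugepage allocation verified: ${total} pages across all nodes"
else
    echo "mismatch: nodes report ${total}, expected ${expected}" >&2
fi
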
00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.024 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.025 20:12:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 85355996 kB' 'MemUsed: 12259632 kB' 'SwapCached: 0 kB' 'Active: 8356848 kB' 'Inactive: 266804 kB' 'Active(anon): 7922248 kB' 'Inactive(anon): 0 kB' 'Active(file): 434600 kB' 'Inactive(file): 266804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8205528 kB' 'Mapped: 100496 kB' 'AnonPages: 421360 kB' 'Shmem: 7504124 kB' 'KernelStack: 12488 kB' 'PageTables: 5468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134748 kB' 'Slab: 433680 kB' 'SReclaimable: 134748 kB' 'SUnreclaim: 298932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.025 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.026 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.027 node0=1024 expecting 1024 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.027 20:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:34.562 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.562 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.562 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:80:04.5 (8086 2021): Already using the 
vfio-pci driver 00:03:34.563 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.563 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.563 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168490628 kB' 'MemAvailable: 171753556 kB' 'Buffers: 4824 kB' 'Cached: 16658788 kB' 'SwapCached: 0 kB' 'Active: 13783304 kB' 'Inactive: 3533972 kB' 'Active(anon): 13057048 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656576 kB' 'Mapped: 208484 kB' 'Shmem: 12403384 kB' 'KReclaimable: 293832 kB' 'Slab: 926940 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633108 kB' 'KernelStack: 20688 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14500496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318388 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.826 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168491084 kB' 'MemAvailable: 171754012 kB' 'Buffers: 4824 kB' 'Cached: 16658792 kB' 'SwapCached: 0 kB' 'Active: 13782240 kB' 'Inactive: 3533972 kB' 'Active(anon): 13055984 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655940 kB' 'Mapped: 207972 kB' 'Shmem: 12403388 kB' 'KReclaimable: 293832 kB' 'Slab: 926948 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633116 kB' 'KernelStack: 20656 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14500512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318372 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
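(Editor's note) The trace around this point shows setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key, skipping every field that does not match the requested name, and setup/hugepages.sh using the AnonHugePages, HugePages_Surp and HugePages_Rsvd counts it returns to verify the hugepage allocation. The short Bash sketch below only illustrates that pattern; it is not part of the SPDK scripts, and the names get_meminfo_sketch and expected_nr_hugepages are assumptions made for the illustration.

    #!/usr/bin/env bash
    # Illustrative stand-in for the meminfo lookup traced here (system-wide
    # /proc/meminfo only; the real helper also handles per-node files).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0                                 # key absent -> report 0
    }

    # Verification pattern similar to the hugepages.sh checks traced here:
    expected_nr_hugepages=1024                 # value echoed as nr_hugepages in this run
    anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)
    (( total == expected_nr_hugepages + surp + resv )) && echo "nr_hugepages=$total verified"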
00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168492152 kB' 'MemAvailable: 171755080 kB' 'Buffers: 4824 kB' 'Cached: 16658812 kB' 'SwapCached: 0 kB' 'Active: 13782244 kB' 'Inactive: 3533972 kB' 'Active(anon): 13055988 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655936 kB' 'Mapped: 207972 kB' 'Shmem: 12403408 kB' 'KReclaimable: 293832 kB' 'Slab: 926948 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633116 kB' 'KernelStack: 20656 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14500536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318372 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.829 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 
20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.830 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.831 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.831 nr_hugepages=1024 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.831 resv_hugepages=0 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.831 surplus_hugepages=0 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.831 anon_hugepages=0 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.831 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 168492452 kB' 'MemAvailable: 171755380 kB' 'Buffers: 4824 kB' 'Cached: 16658832 kB' 'SwapCached: 0 kB' 'Active: 13782248 kB' 'Inactive: 3533972 kB' 'Active(anon): 13055992 kB' 'Inactive(anon): 0 kB' 'Active(file): 726256 kB' 'Inactive(file): 3533972 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655936 kB' 'Mapped: 207972 kB' 'Shmem: 12403428 kB' 'KReclaimable: 293832 kB' 'Slab: 926948 kB' 'SReclaimable: 293832 kB' 'SUnreclaim: 633116 kB' 'KernelStack: 20656 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 14500560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318372 kB' 'VmallocChunk: 0 kB' 'Percpu: 87168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3814356 kB' 'DirectMap2M: 52488192 kB' 'DirectMap1G: 145752064 kB' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.831 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.832 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 85328628 kB' 'MemUsed: 12287000 kB' 'SwapCached: 0 kB' 'Active: 8356448 kB' 'Inactive: 266804 kB' 'Active(anon): 7921848 kB' 'Inactive(anon): 0 kB' 'Active(file): 434600 kB' 'Inactive(file): 266804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8205536 kB' 'Mapped: 100504 kB' 'AnonPages: 420872 kB' 'Shmem: 7504132 kB' 'KernelStack: 12456 kB' 'PageTables: 5336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134748 kB' 'Slab: 433020 kB' 'SReclaimable: 134748 kB' 'SUnreclaim: 298272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.832 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
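The key-by-key scan continues below in the same pattern. For reference, the get_meminfo helper being traced here (setup/common.sh@17-33) appears to boil down to the following sketch; it is reconstructed from the xtrace markers rather than copied from the SPDK source, so internals may differ:

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the trace; not the verbatim SPDK helper.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}               # key to fetch, optional NUMA node
        local var val _ line
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument the node-local meminfo is used instead (common.sh@23-24 above).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long runs of "continue" above are this test
            echo "$val"                        # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total                # -> 1024 on this system
    get_meminfo HugePages_Surp 0               # -> node-0 value, as queried at the end of this block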
00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
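The scan below finishes by matching HugePages_Surp and echoing 0. If you only want the node-0 hugepage counters that all this looping is extracting, the same figures can be read directly; the path and the "Node 0" line prefix match the trace, and the awk one-liner here is just an illustration, not part of the test scripts:

    # Pull the node-0 hugepage counters the trace above reads one key at a time.
    for key in HugePages_Total HugePages_Free HugePages_Surp; do
        awk -v k="$key" '$1 == "Node" && $3 == (k ":") { print k "=" $4 }' \
            /sys/devices/system/node/node0/meminfo
    done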
00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.833 20:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.834 node0=1024 expecting 1024 00:03:34.834 20:12:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.834 00:03:34.834 real 0m6.266s 00:03:34.834 user 0m2.447s 00:03:34.834 sys 0m3.931s 00:03:34.834 20:12:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:34.834 20:12:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.834 ************************************ 00:03:34.834 END TEST no_shrink_alloc 00:03:34.834 ************************************ 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.834 20:12:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.834 00:03:34.834 real 0m23.620s 00:03:34.834 user 0m8.611s 00:03:34.834 sys 0m13.875s 00:03:34.834 20:12:47 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:34.834 20:12:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.834 ************************************ 00:03:34.834 END TEST hugepages 00:03:34.834 ************************************ 00:03:35.091 20:12:47 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:35.091 20:12:47 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:35.091 20:12:47 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:35.091 20:12:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.091 ************************************ 00:03:35.091 START TEST driver 00:03:35.091 ************************************ 00:03:35.091 20:12:47 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:35.091 * Looking for test storage... 
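The clear_hp teardown traced just above (setup/hugepages.sh@37-45) resets every per-node hugepage pool before the driver tests start. A sketch of what those loops appear to do follows; the nr_hugepages write target is an assumption, since xtrace does not print redirections, and the node list is taken straight from sysfs here instead of the script's nodes_sys array:

    # Sketch of the clear_hp teardown traced above; assumptions noted inline.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node*[0-9]; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"    # assumed target; the trace only shows "echo 0"
            done
        done
        export CLEAR_HUGE=yes                  # hugepages.sh@45 above
    }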
00:03:35.092 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:35.092 20:12:47 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:35.092 20:12:47 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.092 20:12:47 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.280 20:12:51 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:39.280 20:12:51 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.281 20:12:51 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.281 20:12:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:39.281 ************************************ 00:03:39.281 START TEST guess_driver 00:03:39.281 ************************************ 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 222 > 0 )) 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:39.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:39.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:39.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:39.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:39.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:39.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:39.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:39.281 Looking for driver=vfio-pci 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.281 20:12:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.567 20:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.522 20:12:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.522 20:12:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.522 20:12:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.522 20:12:56 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:43.522 20:12:56 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:43.522 20:12:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.522 20:12:56 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.792 00:03:48.792 real 0m8.989s 00:03:48.792 user 0m2.533s 00:03:48.792 sys 0m4.425s 00:03:48.792 20:13:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.792 20:13:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.792 ************************************ 00:03:48.792 END TEST guess_driver 00:03:48.792 ************************************ 00:03:48.792 00:03:48.792 real 0m12.939s 00:03:48.792 user 0m3.542s 00:03:48.792 sys 0m6.438s 00:03:48.792 20:13:00 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.792 
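[editor's note] The guess_driver trace above boils down to one decision: if the host exposes populated IOMMU groups (or vfio is allowed in unsafe no-IOMMU mode) and vfio_pci resolves to loadable .ko objects, the setup scripts pick vfio-pci. A minimal standalone sketch of that check, not SPDK's driver.sh itself; the uio_pci_generic fallback is an assumption for illustration, the trace never reaches it:

#!/usr/bin/env bash
# Hedged sketch of the driver-guess logic recorded in the trace above.
shopt -s nullglob

pick_driver() {
    local unsafe_vfio=N
    # vfio can run without an IOMMU only if this parameter exists and is Y
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # Count populated IOMMU groups (222 on this host, per the trace)
    local groups=(/sys/kernel/iommu_groups/*)

    # vfio_pci must resolve to real kernel modules in modprobe's dependency listing
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; } \
        && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci
    else
        # Assumed fallback, not taken from the trace
        echo uio_pci_generic
    fi
}

pick_driver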
20:13:00 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.792 ************************************ 00:03:48.792 END TEST driver 00:03:48.792 ************************************ 00:03:48.792 20:13:00 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:48.792 20:13:00 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.792 20:13:00 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.792 20:13:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.792 ************************************ 00:03:48.792 START TEST devices 00:03:48.792 ************************************ 00:03:48.792 20:13:00 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:48.792 * Looking for test storage... 00:03:48.792 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:48.792 20:13:00 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:48.792 20:13:00 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:48.792 20:13:00 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.792 20:13:00 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:51.326 20:13:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:51.326 20:13:04 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:51.326 No valid GPT data, bailing 00:03:51.326 
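[editor's note] The devices suite opening above screens each NVMe namespace three ways: skip zoned devices, skip anything that already carries a partition table, and require at least min_disk_size=3221225472 bytes (3 GiB; this disk reports 1600321314816). A hedged sketch of those probes using plain sysfs and blkid instead of SPDK's spdk-gpt.py helper:

#!/usr/bin/env bash
# Sketch of the block-device screening seen above.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in devices.sh

for dev in /sys/block/nvme*n*; do
    name=${dev##*/}

    # Zoned namespaces report something other than "none" here
    [[ -e $dev/queue/zoned && $(< "$dev/queue/zoned") != none ]] && continue

    # A non-empty PTTYPE means the disk already has a partition table
    if pt=$(blkid -s PTTYPE -o value "/dev/$name") && [[ -n $pt ]]; then
        echo "skipping /dev/$name: already has a $pt partition table"
        continue
    fi

    # Size in 512-byte sectors, converted to bytes
    size_bytes=$(( $(< "$dev/size") * 512 ))
    (( size_bytes >= min_disk_size )) && echo "/dev/$name usable ($size_bytes bytes)"
done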
20:13:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.326 20:13:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.326 20:13:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:51.326 20:13:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:51.326 20:13:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:51.326 20:13:04 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:51.326 20:13:04 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:51.326 20:13:04 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:51.327 20:13:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:51.585 ************************************ 00:03:51.585 START TEST nvme_mount 00:03:51.585 ************************************ 00:03:51.585 20:13:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:51.585 20:13:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:03:51.586 20:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:52.523 Creating new GPT entries in memory. 00:03:52.523 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:52.523 other utilities. 00:03:52.523 20:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:52.523 20:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.523 20:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.523 20:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.523 20:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:53.459 Creating new GPT entries in memory. 00:03:53.459 The operation has completed successfully. 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2854042 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:53.459 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.718 20:13:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.346 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.606 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.606 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.865 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:56.865 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:56.865 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.865 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
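[editor's note] Each nvme_mount pass traced above runs the same cycle: format the target, mount it under test/setup/nvme_mount, create a marker file, verify it, then unmount and wipe signatures. A compressed, hedged sketch of that cycle with shortened paths (the real test drives it through setup/common.sh and devices.sh; run as root on a disposable device):

#!/usr/bin/env bash
set -e
# Hedged sketch of the format/mount/verify/cleanup cycle traced above.
dev=${1:-/dev/nvme0n1p1}            # device under test (assumption: safe to wipe)
mnt=/tmp/nvme_mount_sketch          # stand-in for .../spdk/test/setup/nvme_mount
testfile=$mnt/test_nvme

mkdir -p "$mnt"
mkfs.ext4 -qF "$dev"                # same flags as the trace: quiet, force
mount "$dev" "$mnt"

: > "$testfile"                     # the dummy file the verify step looks for
[[ -e $testfile ]] && echo "verify ok: $testfile"

# Cleanup mirrors cleanup_nvme: remove the file, unmount, wipe signatures
rm "$testfile"
mountpoint -q "$mnt" && umount "$mnt"
wipefs --all "$dev"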
00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.865 20:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.149 20:13:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 
20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.678 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:02.938 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:02.938 00:04:02.938 real 0m11.426s 00:04:02.938 user 0m3.338s 00:04:02.938 sys 0m5.886s 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:02.938 20:13:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:02.938 ************************************ 00:04:02.938 END TEST nvme_mount 00:04:02.938 ************************************ 00:04:02.938 20:13:15 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:02.938 20:13:15 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
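[editor's note] The long "read -r pci _ _ status" runs above, and again in the dm_mount test below, are the verify step: setup.sh config is re-run with PCI_ALLOWED restricted to the NVMe under test, and its per-device output is scanned until the 0000:5f:00.0 line reports the expected active mount or holder. A hedged sketch of that pattern; fake_status is an invented stand-in for the real setup.sh config output:

#!/usr/bin/env bash
# Sketch of the verify loop behind the "read -r pci _ _ status" lines above.
target=0000:5f:00.0
expected='nvme0n1:nvme0n1p1'

fake_status() {
    # Assumption: one line per PCI device, BDF first, status text last
    echo "0000:00:04.7 8086 2021 skipping ioatdma device"
    echo "0000:5f:00.0 8086 0a54 Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
}

found=0
while read -r pci _ _ status; do
    # Only the target controller's line matters; everything else is skipped
    [[ $pci == "$target" && $status == *"Active devices: "*"$expected"* ]] && found=1
done < <(fake_status)

(( found == 1 )) && echo "verify ok: $target still holds $expected"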
00:04:02.938 20:13:15 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.938 20:13:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:02.938 ************************************ 00:04:02.938 START TEST dm_mount 00:04:02.938 ************************************ 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:02.938 20:13:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:03.873 Creating new GPT entries in memory. 00:04:03.873 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:03.873 other utilities. 00:04:03.873 20:13:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:03.873 20:13:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.873 20:13:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.873 20:13:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.873 20:13:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.246 Creating new GPT entries in memory. 00:04:05.246 The operation has completed successfully. 
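[editor's note] For dm_mount the drive is re-partitioned into two 1 GiB slices: sgdisk wipes the label, then creates partition 1 at sectors 2048-2099199 here and partition 2 at 2099200-4196351 just below, while a helper waits for the matching uevents so /dev/nvme0n1p1 and p2 exist before use. A hedged sketch, substituting udevadm settle for SPDK's sync_dev_uevents.sh:

#!/usr/bin/env bash
set -e
# Sketch of the two-partition layout created in this test (1 GiB = 2097152 sectors each).
disk=/dev/nvme0n1
size_sectors=$((1024 * 1024 * 1024 / 512))    # matches common.sh's size/=512

sgdisk "$disk" --zap-all                       # destroy existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:$((2048 + size_sectors - 1))
flock "$disk" sgdisk "$disk" --new=2:$((2048 + size_sectors)):$((2048 + 2 * size_sectors - 1))

# The trace waits on scripts/sync_dev_uevents.sh; udevadm settle is a simpler
# stand-in that waits for the kernel uevent queue to drain.
udevadm settle
ls -l "${disk}p1" "${disk}p2"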
00:04:05.246 20:13:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.246 20:13:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.246 20:13:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.246 20:13:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.246 20:13:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:06.180 The operation has completed successfully. 00:04:06.180 20:13:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.180 20:13:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.180 20:13:18 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2858526 00:04:06.180 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:06.180 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:06.180 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:06.181 20:13:18 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.181 20:13:19 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.181 20:13:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.465 20:13:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.465 20:13:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:11.995 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:11.995 00:04:11.995 real 0m8.929s 00:04:11.995 user 0m2.042s 00:04:11.995 sys 0m3.721s 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:11.995 20:13:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:11.995 ************************************ 00:04:11.995 END TEST dm_mount 00:04:11.995 ************************************ 00:04:11.995 20:13:24 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:11.995 20:13:24 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:11.995 20:13:24 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.995 20:13:24 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.995 20:13:24 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:11.995 20:13:24 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
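[editor's note] The dm_mount test above wraps the two partitions in a device-mapper target (dmsetup create nvme_dm_test, which appeared as /dev/dm-2 here), confirms both partitions list it under .../holders/, and tears it down with dmsetup remove --force plus wipefs. The table devices.sh feeds to dmsetup is not visible in this trace; the sketch below assumes a simple linear concatenation of the two partitions:

#!/usr/bin/env bash
set -e
# Hedged sketch of the device-mapper create/verify/remove cycle traced above.
name=nvme_dm_test
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2

s1=$(blockdev --getsz "$p1")    # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

# Assumed table: both partitions mapped back to back as one linear device
dmsetup create "$name" <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

dm=$(basename "$(readlink -f "/dev/mapper/$name")")    # e.g. dm-2 in the trace
[[ -e /sys/class/block/${p1##*/}/holders/$dm ]] && echo "$p1 held by $dm"

# Teardown mirrors cleanup_dm: drop the mapping, then wipe both partitions
dmsetup remove --force "$name"
wipefs --all "$p1" "$p2"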
00:04:11.995 20:13:24 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.254 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:12.254 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:12.254 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:12.254 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:12.254 20:13:25 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:12.254 20:13:25 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:12.254 20:13:25 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:12.254 20:13:25 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.254 20:13:25 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:12.254 20:13:25 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.254 20:13:25 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:12.254 00:04:12.254 real 0m24.203s 00:04:12.254 user 0m6.718s 00:04:12.254 sys 0m11.968s 00:04:12.254 20:13:25 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:12.254 20:13:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:12.254 ************************************ 00:04:12.254 END TEST devices 00:04:12.254 ************************************ 00:04:12.254 00:04:12.254 real 1m22.478s 00:04:12.254 user 0m25.860s 00:04:12.254 sys 0m44.991s 00:04:12.254 20:13:25 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:12.254 20:13:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.254 ************************************ 00:04:12.254 END TEST setup.sh 00:04:12.254 ************************************ 00:04:12.254 20:13:25 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:15.539 Hugepages 00:04:15.539 node hugesize free / total 00:04:15.539 node0 1048576kB 0 / 0 00:04:15.539 node0 2048kB 2048 / 2048 00:04:15.539 node1 1048576kB 0 / 0 00:04:15.539 node1 2048kB 0 / 0 00:04:15.539 00:04:15.539 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.539 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:15.539 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:15.539 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:15.539 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:15.539 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:15.539 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:15.539 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:15.539 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:15.539 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:15.539 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:15.540 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:15.540 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:15.540 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:15.540 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:15.540 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:15.540 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:15.540 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:15.540 20:13:28 -- spdk/autotest.sh@130 -- # uname -s 00:04:15.540 20:13:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:15.540 20:13:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:15.540 20:13:28 -- 
common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:18.822 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.822 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.197 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:20.197 20:13:32 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:21.131 20:13:33 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:21.131 20:13:33 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:21.131 20:13:33 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:21.131 20:13:33 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:21.131 20:13:33 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:21.131 20:13:33 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:21.131 20:13:33 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.131 20:13:33 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:21.131 20:13:33 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:21.131 20:13:34 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:21.131 20:13:34 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5f:00.0 00:04:21.131 20:13:34 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.415 Waiting for block devices as requested 00:04:24.415 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:04:24.415 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:24.415 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:24.415 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:24.415 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:24.415 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.415 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.415 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:24.415 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:24.673 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:24.673 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:24.673 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:24.673 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:24.932 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.932 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.932 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:25.191 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:25.191 20:13:38 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:25.191 20:13:38 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:25.191 20:13:38 -- common/autotest_common.sh@1498 -- # 
readlink -f /sys/class/nvme/nvme0 00:04:25.191 20:13:38 -- common/autotest_common.sh@1498 -- # grep 0000:5f:00.0/nvme/nvme 00:04:25.191 20:13:38 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:25.191 20:13:38 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:25.191 20:13:38 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:25.191 20:13:38 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:25.191 20:13:38 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:25.191 20:13:38 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:25.191 20:13:38 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:25.191 20:13:38 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:25.191 20:13:38 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:25.191 20:13:38 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:04:25.191 20:13:38 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:25.191 20:13:38 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:25.191 20:13:38 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:25.191 20:13:38 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:25.191 20:13:38 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:25.191 20:13:38 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:25.191 20:13:38 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:25.191 20:13:38 -- common/autotest_common.sh@1553 -- # continue 00:04:25.191 20:13:38 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:25.191 20:13:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.191 20:13:38 -- common/autotest_common.sh@10 -- # set +x 00:04:25.191 20:13:38 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:25.191 20:13:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:25.191 20:13:38 -- common/autotest_common.sh@10 -- # set +x 00:04:25.191 20:13:38 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:28.476 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:28.476 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:28.477 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:28.477 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.477 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.477 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.477 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.477 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.854 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:29.854 20:13:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:29.854 20:13:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.854 20:13:42 -- common/autotest_common.sh@10 -- # set +x 00:04:29.854 20:13:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:29.854 20:13:42 -- 
common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:29.854 20:13:42 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.854 20:13:42 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:29.854 20:13:42 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:29.854 20:13:42 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:29.854 20:13:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:29.854 20:13:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:29.854 20:13:42 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.854 20:13:42 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.854 20:13:42 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:29.854 20:13:42 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:29.854 20:13:42 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5f:00.0 00:04:29.854 20:13:42 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:29.854 20:13:42 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:29.854 20:13:42 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:29.854 20:13:42 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:29.854 20:13:42 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:29.854 20:13:42 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:5f:00.0 00:04:29.854 20:13:42 -- common/autotest_common.sh@1588 -- # [[ -z 0000:5f:00.0 ]] 00:04:29.854 20:13:42 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2868369 00:04:29.854 20:13:42 -- common/autotest_common.sh@1594 -- # waitforlisten 2868369 00:04:29.854 20:13:42 -- common/autotest_common.sh@827 -- # '[' -z 2868369 ']' 00:04:29.854 20:13:42 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.854 20:13:42 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:29.854 20:13:42 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.854 20:13:42 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.854 20:13:42 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:29.854 20:13:42 -- common/autotest_common.sh@10 -- # set +x 00:04:29.854 [2024-05-16 20:13:42.734196] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
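Before spdk_tgt finishes coming up, the opal_revert_cleanup helper above has already collected its candidate controllers; that discovery reduces to two steps. A sketch, with the values this run printed:

  # list the PCI addresses of NVMe controllers known to the setup scripts
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
  # -> 0000:5f:00.0
  # keep only controllers whose PCI device id matches the 0x0a54 filter used by the test
  cat /sys/bus/pci/devices/0000:5f:00.0/device
  # -> 0x0a54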
00:04:29.854 [2024-05-16 20:13:42.734240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868369 ] 00:04:29.854 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.854 [2024-05-16 20:13:42.793302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.113 [2024-05-16 20:13:42.874040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.680 20:13:43 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:30.680 20:13:43 -- common/autotest_common.sh@860 -- # return 0 00:04:30.680 20:13:43 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:30.680 20:13:43 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:30.680 20:13:43 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:04:33.985 nvme0n1 00:04:33.985 20:13:46 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:33.985 [2024-05-16 20:13:46.680404] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:33.985 request: 00:04:33.985 { 00:04:33.985 "nvme_ctrlr_name": "nvme0", 00:04:33.985 "password": "test", 00:04:33.985 "method": "bdev_nvme_opal_revert", 00:04:33.985 "req_id": 1 00:04:33.985 } 00:04:33.985 Got JSON-RPC error response 00:04:33.985 response: 00:04:33.985 { 00:04:33.985 "code": -32602, 00:04:33.985 "message": "Invalid parameters" 00:04:33.985 } 00:04:33.985 20:13:46 -- common/autotest_common.sh@1600 -- # true 00:04:33.985 20:13:46 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:33.985 20:13:46 -- common/autotest_common.sh@1604 -- # killprocess 2868369 00:04:33.985 20:13:46 -- common/autotest_common.sh@946 -- # '[' -z 2868369 ']' 00:04:33.985 20:13:46 -- common/autotest_common.sh@950 -- # kill -0 2868369 00:04:33.985 20:13:46 -- common/autotest_common.sh@951 -- # uname 00:04:33.985 20:13:46 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:33.985 20:13:46 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2868369 00:04:33.985 20:13:46 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:33.985 20:13:46 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:33.985 20:13:46 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2868369' 00:04:33.985 killing process with pid 2868369 00:04:33.985 20:13:46 -- common/autotest_common.sh@965 -- # kill 2868369 00:04:33.985 20:13:46 -- common/autotest_common.sh@970 -- # wait 2868369 00:04:36.518 20:13:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:36.518 20:13:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:36.518 20:13:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:36.518 20:13:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:36.518 20:13:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:36.518 20:13:48 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:36.518 20:13:48 -- common/autotest_common.sh@10 -- # set +x 00:04:36.518 20:13:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:36.518 20:13:48 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:36.518 20:13:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.518 20:13:48 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.518 20:13:48 -- common/autotest_common.sh@10 -- # set +x 00:04:36.518 ************************************ 00:04:36.518 START TEST env 00:04:36.518 ************************************ 00:04:36.518 20:13:48 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:36.518 * Looking for test storage... 00:04:36.518 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:36.518 20:13:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.518 20:13:49 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.518 20:13:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.518 20:13:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.518 ************************************ 00:04:36.518 START TEST env_memory 00:04:36.518 ************************************ 00:04:36.518 20:13:49 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.518 00:04:36.518 00:04:36.518 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.518 http://cunit.sourceforge.net/ 00:04:36.518 00:04:36.518 00:04:36.518 Suite: memory 00:04:36.518 Test: alloc and free memory map ...[2024-05-16 20:13:49.077968] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:36.518 passed 00:04:36.518 Test: mem map translation ...[2024-05-16 20:13:49.096895] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.518 [2024-05-16 20:13:49.096909] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.518 [2024-05-16 20:13:49.096959] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.518 [2024-05-16 20:13:49.096966] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:36.518 passed 00:04:36.518 Test: mem map registration ...[2024-05-16 20:13:49.134696] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:36.518 [2024-05-16 20:13:49.134714] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:36.518 passed 00:04:36.518 Test: mem map adjacent registrations ...passed 00:04:36.518 00:04:36.518 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.518 suites 1 1 n/a 0 0 00:04:36.518 tests 4 4 4 0 0 00:04:36.518 asserts 152 152 152 0 n/a 00:04:36.518 00:04:36.518 Elapsed time = 0.130 seconds 00:04:36.518 00:04:36.518 real 0m0.136s 00:04:36.518 user 0m0.130s 00:04:36.518 sys 0m0.005s 00:04:36.518 20:13:49 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:36.518 20:13:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:36.518 ************************************ 00:04:36.518 END 
TEST env_memory 00:04:36.518 ************************************ 00:04:36.518 20:13:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.518 20:13:49 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.518 20:13:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.518 20:13:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.518 ************************************ 00:04:36.518 START TEST env_vtophys 00:04:36.518 ************************************ 00:04:36.518 20:13:49 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.518 EAL: lib.eal log level changed from notice to debug 00:04:36.518 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.518 EAL: Detected lcore 1 as core 1 on socket 0 00:04:36.518 EAL: Detected lcore 2 as core 2 on socket 0 00:04:36.518 EAL: Detected lcore 3 as core 3 on socket 0 00:04:36.518 EAL: Detected lcore 4 as core 4 on socket 0 00:04:36.518 EAL: Detected lcore 5 as core 5 on socket 0 00:04:36.518 EAL: Detected lcore 6 as core 6 on socket 0 00:04:36.518 EAL: Detected lcore 7 as core 9 on socket 0 00:04:36.518 EAL: Detected lcore 8 as core 10 on socket 0 00:04:36.518 EAL: Detected lcore 9 as core 11 on socket 0 00:04:36.518 EAL: Detected lcore 10 as core 12 on socket 0 00:04:36.518 EAL: Detected lcore 11 as core 13 on socket 0 00:04:36.518 EAL: Detected lcore 12 as core 16 on socket 0 00:04:36.518 EAL: Detected lcore 13 as core 17 on socket 0 00:04:36.518 EAL: Detected lcore 14 as core 18 on socket 0 00:04:36.518 EAL: Detected lcore 15 as core 19 on socket 0 00:04:36.518 EAL: Detected lcore 16 as core 20 on socket 0 00:04:36.518 EAL: Detected lcore 17 as core 21 on socket 0 00:04:36.518 EAL: Detected lcore 18 as core 24 on socket 0 00:04:36.518 EAL: Detected lcore 19 as core 25 on socket 0 00:04:36.518 EAL: Detected lcore 20 as core 26 on socket 0 00:04:36.518 EAL: Detected lcore 21 as core 27 on socket 0 00:04:36.518 EAL: Detected lcore 22 as core 28 on socket 0 00:04:36.518 EAL: Detected lcore 23 as core 29 on socket 0 00:04:36.518 EAL: Detected lcore 24 as core 0 on socket 1 00:04:36.518 EAL: Detected lcore 25 as core 1 on socket 1 00:04:36.518 EAL: Detected lcore 26 as core 2 on socket 1 00:04:36.518 EAL: Detected lcore 27 as core 3 on socket 1 00:04:36.518 EAL: Detected lcore 28 as core 4 on socket 1 00:04:36.518 EAL: Detected lcore 29 as core 5 on socket 1 00:04:36.518 EAL: Detected lcore 30 as core 6 on socket 1 00:04:36.518 EAL: Detected lcore 31 as core 8 on socket 1 00:04:36.518 EAL: Detected lcore 32 as core 9 on socket 1 00:04:36.518 EAL: Detected lcore 33 as core 10 on socket 1 00:04:36.518 EAL: Detected lcore 34 as core 11 on socket 1 00:04:36.518 EAL: Detected lcore 35 as core 12 on socket 1 00:04:36.518 EAL: Detected lcore 36 as core 13 on socket 1 00:04:36.518 EAL: Detected lcore 37 as core 16 on socket 1 00:04:36.518 EAL: Detected lcore 38 as core 17 on socket 1 00:04:36.518 EAL: Detected lcore 39 as core 18 on socket 1 00:04:36.518 EAL: Detected lcore 40 as core 19 on socket 1 00:04:36.518 EAL: Detected lcore 41 as core 20 on socket 1 00:04:36.518 EAL: Detected lcore 42 as core 21 on socket 1 00:04:36.518 EAL: Detected lcore 43 as core 25 on socket 1 00:04:36.518 EAL: Detected lcore 44 as core 26 on socket 1 00:04:36.518 EAL: Detected lcore 45 as core 27 on socket 1 00:04:36.518 EAL: Detected lcore 46 as core 28 on socket 1 00:04:36.518 EAL: 
Detected lcore 47 as core 29 on socket 1 00:04:36.518 EAL: Detected lcore 48 as core 0 on socket 0 00:04:36.518 EAL: Detected lcore 49 as core 1 on socket 0 00:04:36.518 EAL: Detected lcore 50 as core 2 on socket 0 00:04:36.518 EAL: Detected lcore 51 as core 3 on socket 0 00:04:36.518 EAL: Detected lcore 52 as core 4 on socket 0 00:04:36.518 EAL: Detected lcore 53 as core 5 on socket 0 00:04:36.518 EAL: Detected lcore 54 as core 6 on socket 0 00:04:36.518 EAL: Detected lcore 55 as core 9 on socket 0 00:04:36.518 EAL: Detected lcore 56 as core 10 on socket 0 00:04:36.518 EAL: Detected lcore 57 as core 11 on socket 0 00:04:36.518 EAL: Detected lcore 58 as core 12 on socket 0 00:04:36.518 EAL: Detected lcore 59 as core 13 on socket 0 00:04:36.518 EAL: Detected lcore 60 as core 16 on socket 0 00:04:36.518 EAL: Detected lcore 61 as core 17 on socket 0 00:04:36.518 EAL: Detected lcore 62 as core 18 on socket 0 00:04:36.518 EAL: Detected lcore 63 as core 19 on socket 0 00:04:36.518 EAL: Detected lcore 64 as core 20 on socket 0 00:04:36.519 EAL: Detected lcore 65 as core 21 on socket 0 00:04:36.519 EAL: Detected lcore 66 as core 24 on socket 0 00:04:36.519 EAL: Detected lcore 67 as core 25 on socket 0 00:04:36.519 EAL: Detected lcore 68 as core 26 on socket 0 00:04:36.519 EAL: Detected lcore 69 as core 27 on socket 0 00:04:36.519 EAL: Detected lcore 70 as core 28 on socket 0 00:04:36.519 EAL: Detected lcore 71 as core 29 on socket 0 00:04:36.519 EAL: Detected lcore 72 as core 0 on socket 1 00:04:36.519 EAL: Detected lcore 73 as core 1 on socket 1 00:04:36.519 EAL: Detected lcore 74 as core 2 on socket 1 00:04:36.519 EAL: Detected lcore 75 as core 3 on socket 1 00:04:36.519 EAL: Detected lcore 76 as core 4 on socket 1 00:04:36.519 EAL: Detected lcore 77 as core 5 on socket 1 00:04:36.519 EAL: Detected lcore 78 as core 6 on socket 1 00:04:36.519 EAL: Detected lcore 79 as core 8 on socket 1 00:04:36.519 EAL: Detected lcore 80 as core 9 on socket 1 00:04:36.519 EAL: Detected lcore 81 as core 10 on socket 1 00:04:36.519 EAL: Detected lcore 82 as core 11 on socket 1 00:04:36.519 EAL: Detected lcore 83 as core 12 on socket 1 00:04:36.519 EAL: Detected lcore 84 as core 13 on socket 1 00:04:36.519 EAL: Detected lcore 85 as core 16 on socket 1 00:04:36.519 EAL: Detected lcore 86 as core 17 on socket 1 00:04:36.519 EAL: Detected lcore 87 as core 18 on socket 1 00:04:36.519 EAL: Detected lcore 88 as core 19 on socket 1 00:04:36.519 EAL: Detected lcore 89 as core 20 on socket 1 00:04:36.519 EAL: Detected lcore 90 as core 21 on socket 1 00:04:36.519 EAL: Detected lcore 91 as core 25 on socket 1 00:04:36.519 EAL: Detected lcore 92 as core 26 on socket 1 00:04:36.519 EAL: Detected lcore 93 as core 27 on socket 1 00:04:36.519 EAL: Detected lcore 94 as core 28 on socket 1 00:04:36.519 EAL: Detected lcore 95 as core 29 on socket 1 00:04:36.519 EAL: Maximum logical cores by configuration: 128 00:04:36.519 EAL: Detected CPU lcores: 96 00:04:36.519 EAL: Detected NUMA nodes: 2 00:04:36.519 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:36.519 EAL: Detected shared linkage of DPDK 00:04:36.519 EAL: No shared files mode enabled, IPC will be disabled 00:04:36.519 EAL: Bus pci wants IOVA as 'DC' 00:04:36.519 EAL: Buses did not request a specific IOVA mode. 00:04:36.519 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:36.519 EAL: Selected IOVA mode 'VA' 00:04:36.519 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.519 EAL: Probing VFIO support... 
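The lcore-to-core/socket mapping EAL enumerates above is the same information Linux exposes in sysfs, so it can be cross-checked without DPDK. A sketch:

  # print "cpuN: core X socket Y" for every CPU, mirroring the EAL detection lines
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    echo "$(basename "$cpu"): core $(cat "$cpu"/topology/core_id) socket $(cat "$cpu"/topology/physical_package_id)"
  done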
00:04:36.519 EAL: IOMMU type 1 (Type 1) is supported 00:04:36.519 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:36.519 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:36.519 EAL: VFIO support initialized 00:04:36.519 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.519 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.519 EAL: Setting up physically contiguous memory... 00:04:36.519 EAL: Setting maximum number of open files to 524288 00:04:36.519 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.519 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:36.519 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.519 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.519 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.519 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.519 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.519 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.519 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.519 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.519 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.519 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.519 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.519 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.519 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.519 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.519 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.519 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.519 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.519 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.519 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.519 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.519 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.519 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.519 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.519 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.519 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.519 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:36.519 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.519 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:36.519 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.519 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.519 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:36.519 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:36.519 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.519 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:36.519 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.519 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.519 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:36.519 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:36.519 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.519 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:36.519 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.519 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.519 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:36.519 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:36.519 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.519 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:36.519 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.519 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.519 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:36.519 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:36.519 EAL: Hugepages will be freed exactly as allocated. 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: TSC frequency is ~2100000 KHz 00:04:36.519 EAL: Main lcore 0 is ready (tid=7f21aef3da00;cpuset=[0]) 00:04:36.519 EAL: Trying to obtain current memory policy. 00:04:36.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.519 EAL: Restoring previous memory policy: 0 00:04:36.519 EAL: request: mp_malloc_sync 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:36.519 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.519 00:04:36.519 00:04:36.519 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.519 http://cunit.sourceforge.net/ 00:04:36.519 00:04:36.519 00:04:36.519 Suite: components_suite 00:04:36.519 Test: vtophys_malloc_test ...passed 00:04:36.519 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:36.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.519 EAL: Restoring previous memory policy: 4 00:04:36.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.519 EAL: request: mp_malloc_sync 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.519 EAL: request: mp_malloc_sync 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.519 EAL: Trying to obtain current memory policy. 00:04:36.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.519 EAL: Restoring previous memory policy: 4 00:04:36.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.519 EAL: request: mp_malloc_sync 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.519 EAL: request: mp_malloc_sync 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.519 EAL: Trying to obtain current memory policy. 00:04:36.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.519 EAL: Restoring previous memory policy: 4 00:04:36.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.519 EAL: request: mp_malloc_sync 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.519 EAL: request: mp_malloc_sync 00:04:36.519 EAL: No shared files mode enabled, IPC is disabled 00:04:36.519 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.519 EAL: Trying to obtain current memory policy. 
00:04:36.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.519 EAL: Restoring previous memory policy: 4 00:04:36.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.520 EAL: Trying to obtain current memory policy. 00:04:36.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.520 EAL: Restoring previous memory policy: 4 00:04:36.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was expanded by 34MB 00:04:36.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was shrunk by 34MB 00:04:36.520 EAL: Trying to obtain current memory policy. 00:04:36.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.520 EAL: Restoring previous memory policy: 4 00:04:36.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was expanded by 66MB 00:04:36.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was shrunk by 66MB 00:04:36.520 EAL: Trying to obtain current memory policy. 00:04:36.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.520 EAL: Restoring previous memory policy: 4 00:04:36.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was expanded by 130MB 00:04:36.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was shrunk by 130MB 00:04:36.520 EAL: Trying to obtain current memory policy. 00:04:36.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.520 EAL: Restoring previous memory policy: 4 00:04:36.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.520 EAL: request: mp_malloc_sync 00:04:36.520 EAL: No shared files mode enabled, IPC is disabled 00:04:36.520 EAL: Heap on socket 0 was expanded by 258MB 00:04:36.778 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.778 EAL: request: mp_malloc_sync 00:04:36.778 EAL: No shared files mode enabled, IPC is disabled 00:04:36.778 EAL: Heap on socket 0 was shrunk by 258MB 00:04:36.778 EAL: Trying to obtain current memory policy. 
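The heap expansion sizes reported in the rounds above follow a simple pattern, 2^k + 2 in MB (4, 6, 10, 18, 34, 66, 130, 258, ...). A one-liner to reproduce the ladder, as a sketch:

  # prints 4 6 10 18 34 66 130 258 514 1026
  for k in $(seq 1 10); do echo $((2**k + 2)); done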
00:04:36.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.778 EAL: Restoring previous memory policy: 4 00:04:36.778 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.778 EAL: request: mp_malloc_sync 00:04:36.778 EAL: No shared files mode enabled, IPC is disabled 00:04:36.778 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.778 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.036 EAL: request: mp_malloc_sync 00:04:37.036 EAL: No shared files mode enabled, IPC is disabled 00:04:37.036 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.036 EAL: Trying to obtain current memory policy. 00:04:37.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.036 EAL: Restoring previous memory policy: 4 00:04:37.036 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.036 EAL: request: mp_malloc_sync 00:04:37.036 EAL: No shared files mode enabled, IPC is disabled 00:04:37.036 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.554 EAL: request: mp_malloc_sync 00:04:37.554 EAL: No shared files mode enabled, IPC is disabled 00:04:37.554 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.554 passed 00:04:37.554 00:04:37.554 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.554 suites 1 1 n/a 0 0 00:04:37.554 tests 2 2 2 0 0 00:04:37.554 asserts 497 497 497 0 n/a 00:04:37.554 00:04:37.554 Elapsed time = 0.961 seconds 00:04:37.554 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.554 EAL: request: mp_malloc_sync 00:04:37.554 EAL: No shared files mode enabled, IPC is disabled 00:04:37.554 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.554 EAL: No shared files mode enabled, IPC is disabled 00:04:37.554 EAL: No shared files mode enabled, IPC is disabled 00:04:37.554 EAL: No shared files mode enabled, IPC is disabled 00:04:37.554 00:04:37.554 real 0m1.085s 00:04:37.554 user 0m0.631s 00:04:37.554 sys 0m0.417s 00:04:37.554 20:13:50 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.554 20:13:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:37.554 ************************************ 00:04:37.554 END TEST env_vtophys 00:04:37.554 ************************************ 00:04:37.554 20:13:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.554 20:13:50 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.554 20:13:50 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.554 20:13:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.554 ************************************ 00:04:37.554 START TEST env_pci 00:04:37.554 ************************************ 00:04:37.554 20:13:50 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.554 00:04:37.554 00:04:37.554 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.554 http://cunit.sourceforge.net/ 00:04:37.554 00:04:37.554 00:04:37.554 Suite: pci 00:04:37.554 Test: pci_hook ...[2024-05-16 20:13:50.423681] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2869686 has claimed it 00:04:37.554 EAL: Cannot find device (10000:00:01.0) 00:04:37.554 EAL: Failed to attach device on primary process 00:04:37.554 passed 00:04:37.554 00:04:37.554 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.554 suites 1 
1 n/a 0 0 00:04:37.554 tests 1 1 1 0 0 00:04:37.554 asserts 25 25 25 0 n/a 00:04:37.554 00:04:37.554 Elapsed time = 0.031 seconds 00:04:37.554 00:04:37.554 real 0m0.052s 00:04:37.554 user 0m0.014s 00:04:37.554 sys 0m0.037s 00:04:37.554 20:13:50 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.554 20:13:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:37.554 ************************************ 00:04:37.554 END TEST env_pci 00:04:37.554 ************************************ 00:04:37.554 20:13:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.554 20:13:50 env -- env/env.sh@15 -- # uname 00:04:37.554 20:13:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.554 20:13:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.554 20:13:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.554 20:13:50 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:37.554 20:13:50 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.554 20:13:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.554 ************************************ 00:04:37.554 START TEST env_dpdk_post_init 00:04:37.554 ************************************ 00:04:37.554 20:13:50 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.813 EAL: Detected CPU lcores: 96 00:04:37.813 EAL: Detected NUMA nodes: 2 00:04:37.813 EAL: Detected shared linkage of DPDK 00:04:37.813 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.813 EAL: Selected IOVA mode 'VA' 00:04:37.813 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.813 EAL: VFIO support initialized 00:04:37.813 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.813 EAL: Using IOMMU type 1 (Type 1) 00:04:37.813 EAL: Ignore mapping IO port bar(1) 00:04:37.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:37.813 EAL: Ignore mapping IO port bar(1) 00:04:37.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:37.813 EAL: Ignore mapping IO port bar(1) 00:04:37.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:37.813 EAL: Ignore mapping IO port bar(1) 00:04:37.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:37.813 EAL: Ignore mapping IO port bar(1) 00:04:37.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:37.813 EAL: Ignore mapping IO port bar(1) 00:04:37.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:37.813 EAL: Ignore mapping IO port bar(1) 00:04:37.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:37.813 EAL: Ignore mapping IO port bar(1) 00:04:37.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:38.748 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:04:38.748 EAL: Ignore mapping IO port bar(1) 00:04:38.748 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:38.748 EAL: Ignore mapping IO port bar(1) 00:04:38.748 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:38.748 EAL: Ignore mapping 
IO port bar(1) 00:04:38.748 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:38.748 EAL: Ignore mapping IO port bar(1) 00:04:38.748 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:38.748 EAL: Ignore mapping IO port bar(1) 00:04:38.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:38.749 EAL: Ignore mapping IO port bar(1) 00:04:38.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:38.749 EAL: Ignore mapping IO port bar(1) 00:04:38.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:38.749 EAL: Ignore mapping IO port bar(1) 00:04:38.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:42.947 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:04:42.947 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:04:42.947 Starting DPDK initialization... 00:04:42.947 Starting SPDK post initialization... 00:04:42.947 SPDK NVMe probe 00:04:42.947 Attaching to 0000:5f:00.0 00:04:42.947 Attached to 0000:5f:00.0 00:04:42.947 Cleaning up... 00:04:42.947 00:04:42.947 real 0m4.924s 00:04:42.947 user 0m3.830s 00:04:42.947 sys 0m0.164s 00:04:42.947 20:13:55 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.947 20:13:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.947 ************************************ 00:04:42.947 END TEST env_dpdk_post_init 00:04:42.947 ************************************ 00:04:42.947 20:13:55 env -- env/env.sh@26 -- # uname 00:04:42.947 20:13:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:42.947 20:13:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.947 20:13:55 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.947 20:13:55 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.947 20:13:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.947 ************************************ 00:04:42.947 START TEST env_mem_callbacks 00:04:42.947 ************************************ 00:04:42.947 20:13:55 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.947 EAL: Detected CPU lcores: 96 00:04:42.947 EAL: Detected NUMA nodes: 2 00:04:42.947 EAL: Detected shared linkage of DPDK 00:04:42.947 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.947 EAL: Selected IOVA mode 'VA' 00:04:42.947 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.947 EAL: VFIO support initialized 00:04:42.947 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.947 00:04:42.947 00:04:42.947 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.947 http://cunit.sourceforge.net/ 00:04:42.947 00:04:42.947 00:04:42.947 Suite: memory 00:04:42.947 Test: test ... 
00:04:42.947 register 0x200000200000 2097152 00:04:42.947 malloc 3145728 00:04:42.947 register 0x200000400000 4194304 00:04:42.947 buf 0x200000500000 len 3145728 PASSED 00:04:42.947 malloc 64 00:04:42.947 buf 0x2000004fff40 len 64 PASSED 00:04:42.947 malloc 4194304 00:04:42.947 register 0x200000800000 6291456 00:04:42.947 buf 0x200000a00000 len 4194304 PASSED 00:04:42.947 free 0x200000500000 3145728 00:04:42.947 free 0x2000004fff40 64 00:04:42.947 unregister 0x200000400000 4194304 PASSED 00:04:42.947 free 0x200000a00000 4194304 00:04:42.947 unregister 0x200000800000 6291456 PASSED 00:04:42.947 malloc 8388608 00:04:42.947 register 0x200000400000 10485760 00:04:42.947 buf 0x200000600000 len 8388608 PASSED 00:04:42.947 free 0x200000600000 8388608 00:04:42.947 unregister 0x200000400000 10485760 PASSED 00:04:42.947 passed 00:04:42.947 00:04:42.947 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.947 suites 1 1 n/a 0 0 00:04:42.947 tests 1 1 1 0 0 00:04:42.947 asserts 15 15 15 0 n/a 00:04:42.947 00:04:42.947 Elapsed time = 0.006 seconds 00:04:42.947 00:04:42.947 real 0m0.057s 00:04:42.947 user 0m0.015s 00:04:42.947 sys 0m0.042s 00:04:42.947 20:13:55 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.947 20:13:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.947 ************************************ 00:04:42.947 END TEST env_mem_callbacks 00:04:42.947 ************************************ 00:04:42.947 00:04:42.947 real 0m6.682s 00:04:42.947 user 0m4.776s 00:04:42.947 sys 0m0.946s 00:04:42.947 20:13:55 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.947 20:13:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.947 ************************************ 00:04:42.947 END TEST env 00:04:42.947 ************************************ 00:04:42.947 20:13:55 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.947 20:13:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.947 20:13:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.947 20:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:42.947 ************************************ 00:04:42.947 START TEST rpc 00:04:42.947 ************************************ 00:04:42.947 20:13:55 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.947 * Looking for test storage... 00:04:42.947 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:42.947 20:13:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2870723 00:04:42.947 20:13:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.947 20:13:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2870723 00:04:42.947 20:13:55 rpc -- common/autotest_common.sh@827 -- # '[' -z 2870723 ']' 00:04:42.947 20:13:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:42.947 20:13:55 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.947 20:13:55 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:42.947 20:13:55 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
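What the trace above boils down to is the usual start-and-wait pattern for an SPDK target. A minimal sketch (the polling loop stands in for the waitforlisten helper the framework actually uses; rpc_get_methods is just a cheap call to probe the socket):

  # start the target with the bdev tracepoint group enabled, then wait for its RPC socket
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev &
  tgt_pid=$!
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done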
00:04:42.947 20:13:55 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:42.947 20:13:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.947 [2024-05-16 20:13:55.829079] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:42.947 [2024-05-16 20:13:55.829125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870723 ] 00:04:42.947 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.947 [2024-05-16 20:13:55.889833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.206 [2024-05-16 20:13:55.970600] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:43.206 [2024-05-16 20:13:55.970631] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2870723' to capture a snapshot of events at runtime. 00:04:43.206 [2024-05-16 20:13:55.970638] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:43.206 [2024-05-16 20:13:55.970644] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:43.206 [2024-05-16 20:13:55.970648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2870723 for offline analysis/debug. 00:04:43.206 [2024-05-16 20:13:55.970672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.773 20:13:56 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:43.773 20:13:56 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:43.773 20:13:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:43.773 20:13:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:43.773 20:13:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.773 20:13:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.773 20:13:56 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.773 20:13:56 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.773 20:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.773 ************************************ 00:04:43.773 START TEST rpc_integrity 00:04:43.773 ************************************ 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
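The rpc_integrity check above and the steps that follow drive everything through rpc_cmd, which issues the same calls scripts/rpc.py exposes; the whole check reduces to the sketch below (the $rpc shorthand is only for readability here):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]      # no bdevs registered yet
  $rpc bdev_malloc_create 8 512                       # 8 MB malloc bdev, 512-byte blocks -> Malloc0
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0   # stack a passthru bdev on top of it
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]      # Malloc0 and Passthru0 are both listed
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]      # back to an empty bdev list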
00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.773 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.773 { 00:04:43.773 "name": "Malloc0", 00:04:43.773 "aliases": [ 00:04:43.773 "31f4e659-ace3-498f-b0ee-3377210ba3a3" 00:04:43.773 ], 00:04:43.773 "product_name": "Malloc disk", 00:04:43.773 "block_size": 512, 00:04:43.773 "num_blocks": 16384, 00:04:43.773 "uuid": "31f4e659-ace3-498f-b0ee-3377210ba3a3", 00:04:43.773 "assigned_rate_limits": { 00:04:43.773 "rw_ios_per_sec": 0, 00:04:43.773 "rw_mbytes_per_sec": 0, 00:04:43.773 "r_mbytes_per_sec": 0, 00:04:43.773 "w_mbytes_per_sec": 0 00:04:43.773 }, 00:04:43.773 "claimed": false, 00:04:43.773 "zoned": false, 00:04:43.773 "supported_io_types": { 00:04:43.773 "read": true, 00:04:43.773 "write": true, 00:04:43.773 "unmap": true, 00:04:43.773 "write_zeroes": true, 00:04:43.773 "flush": true, 00:04:43.773 "reset": true, 00:04:43.773 "compare": false, 00:04:43.773 "compare_and_write": false, 00:04:43.773 "abort": true, 00:04:43.773 "nvme_admin": false, 00:04:43.773 "nvme_io": false 00:04:43.773 }, 00:04:43.773 "memory_domains": [ 00:04:43.773 { 00:04:43.773 "dma_device_id": "system", 00:04:43.773 "dma_device_type": 1 00:04:43.773 }, 00:04:43.773 { 00:04:43.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.773 "dma_device_type": 2 00:04:43.773 } 00:04:43.773 ], 00:04:43.773 "driver_specific": {} 00:04:43.773 } 00:04:43.773 ]' 00:04:43.773 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 [2024-05-16 20:13:56.785381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:44.032 [2024-05-16 20:13:56.785409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.032 [2024-05-16 20:13:56.785420] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b87950 00:04:44.032 [2024-05-16 20:13:56.785430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.032 [2024-05-16 20:13:56.786517] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.032 [2024-05-16 20:13:56.786537] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.032 Passthru0 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.032 20:13:56 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.032 { 00:04:44.032 "name": "Malloc0", 00:04:44.032 "aliases": [ 00:04:44.032 "31f4e659-ace3-498f-b0ee-3377210ba3a3" 00:04:44.032 ], 00:04:44.032 "product_name": "Malloc disk", 00:04:44.032 "block_size": 512, 00:04:44.032 "num_blocks": 16384, 00:04:44.032 "uuid": "31f4e659-ace3-498f-b0ee-3377210ba3a3", 00:04:44.032 "assigned_rate_limits": { 00:04:44.032 "rw_ios_per_sec": 0, 00:04:44.032 "rw_mbytes_per_sec": 0, 00:04:44.032 "r_mbytes_per_sec": 0, 00:04:44.032 "w_mbytes_per_sec": 0 00:04:44.032 }, 00:04:44.032 "claimed": true, 00:04:44.032 "claim_type": "exclusive_write", 00:04:44.032 "zoned": false, 00:04:44.032 "supported_io_types": { 00:04:44.032 "read": true, 00:04:44.032 "write": true, 00:04:44.032 "unmap": true, 00:04:44.032 "write_zeroes": true, 00:04:44.032 "flush": true, 00:04:44.032 "reset": true, 00:04:44.032 "compare": false, 00:04:44.032 "compare_and_write": false, 00:04:44.032 "abort": true, 00:04:44.032 "nvme_admin": false, 00:04:44.032 "nvme_io": false 00:04:44.032 }, 00:04:44.032 "memory_domains": [ 00:04:44.032 { 00:04:44.032 "dma_device_id": "system", 00:04:44.032 "dma_device_type": 1 00:04:44.032 }, 00:04:44.032 { 00:04:44.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.032 "dma_device_type": 2 00:04:44.032 } 00:04:44.032 ], 00:04:44.032 "driver_specific": {} 00:04:44.032 }, 00:04:44.032 { 00:04:44.032 "name": "Passthru0", 00:04:44.032 "aliases": [ 00:04:44.032 "71567972-405a-5548-a10f-d0d0ac6c0a92" 00:04:44.032 ], 00:04:44.032 "product_name": "passthru", 00:04:44.032 "block_size": 512, 00:04:44.032 "num_blocks": 16384, 00:04:44.032 "uuid": "71567972-405a-5548-a10f-d0d0ac6c0a92", 00:04:44.032 "assigned_rate_limits": { 00:04:44.032 "rw_ios_per_sec": 0, 00:04:44.032 "rw_mbytes_per_sec": 0, 00:04:44.032 "r_mbytes_per_sec": 0, 00:04:44.032 "w_mbytes_per_sec": 0 00:04:44.032 }, 00:04:44.032 "claimed": false, 00:04:44.032 "zoned": false, 00:04:44.032 "supported_io_types": { 00:04:44.032 "read": true, 00:04:44.032 "write": true, 00:04:44.032 "unmap": true, 00:04:44.032 "write_zeroes": true, 00:04:44.032 "flush": true, 00:04:44.032 "reset": true, 00:04:44.032 "compare": false, 00:04:44.032 "compare_and_write": false, 00:04:44.032 "abort": true, 00:04:44.032 "nvme_admin": false, 00:04:44.032 "nvme_io": false 00:04:44.032 }, 00:04:44.032 "memory_domains": [ 00:04:44.032 { 00:04:44.032 "dma_device_id": "system", 00:04:44.032 "dma_device_type": 1 00:04:44.032 }, 00:04:44.032 { 00:04:44.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.032 "dma_device_type": 2 00:04:44.032 } 00:04:44.032 ], 00:04:44.032 "driver_specific": { 00:04:44.032 "passthru": { 00:04:44.032 "name": "Passthru0", 00:04:44.032 "base_bdev_name": "Malloc0" 00:04:44.032 } 00:04:44.032 } 00:04:44.032 } 00:04:44.032 ]' 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 20:13:56 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.032 20:13:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.032 00:04:44.032 real 0m0.247s 00:04:44.032 user 0m0.156s 00:04:44.032 sys 0m0.029s 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.032 20:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 ************************************ 00:04:44.032 END TEST rpc_integrity 00:04:44.032 ************************************ 00:04:44.032 20:13:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:44.032 20:13:56 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.032 20:13:56 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.032 20:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 ************************************ 00:04:44.032 START TEST rpc_plugins 00:04:44.032 ************************************ 00:04:44.032 20:13:56 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:44.032 20:13:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:44.032 20:13:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.032 20:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 20:13:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.032 20:13:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:44.032 20:13:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:44.032 20:13:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.032 20:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.032 20:13:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:44.032 { 00:04:44.032 "name": "Malloc1", 00:04:44.032 "aliases": [ 00:04:44.032 "39c937b2-3939-4f89-a93f-8b676bc77671" 00:04:44.032 ], 00:04:44.032 "product_name": "Malloc disk", 00:04:44.032 "block_size": 4096, 00:04:44.032 "num_blocks": 256, 00:04:44.032 "uuid": "39c937b2-3939-4f89-a93f-8b676bc77671", 00:04:44.032 "assigned_rate_limits": { 00:04:44.032 "rw_ios_per_sec": 0, 00:04:44.032 "rw_mbytes_per_sec": 0, 00:04:44.032 "r_mbytes_per_sec": 0, 00:04:44.032 "w_mbytes_per_sec": 0 00:04:44.032 }, 00:04:44.032 "claimed": false, 00:04:44.032 "zoned": false, 00:04:44.032 "supported_io_types": { 00:04:44.032 "read": true, 00:04:44.032 "write": true, 00:04:44.032 "unmap": true, 00:04:44.032 "write_zeroes": true, 00:04:44.032 "flush": true, 00:04:44.032 
"reset": true, 00:04:44.032 "compare": false, 00:04:44.032 "compare_and_write": false, 00:04:44.032 "abort": true, 00:04:44.032 "nvme_admin": false, 00:04:44.032 "nvme_io": false 00:04:44.032 }, 00:04:44.032 "memory_domains": [ 00:04:44.032 { 00:04:44.032 "dma_device_id": "system", 00:04:44.032 "dma_device_type": 1 00:04:44.032 }, 00:04:44.033 { 00:04:44.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.033 "dma_device_type": 2 00:04:44.033 } 00:04:44.033 ], 00:04:44.033 "driver_specific": {} 00:04:44.033 } 00:04:44.033 ]' 00:04:44.033 20:13:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:44.292 20:13:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:44.292 20:13:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:44.292 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.292 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.292 20:13:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:44.292 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.292 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.292 20:13:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:44.292 20:13:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:44.292 20:13:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:44.292 00:04:44.292 real 0m0.128s 00:04:44.292 user 0m0.081s 00:04:44.292 sys 0m0.014s 00:04:44.292 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.292 20:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 ************************************ 00:04:44.292 END TEST rpc_plugins 00:04:44.292 ************************************ 00:04:44.292 20:13:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:44.292 20:13:57 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.292 20:13:57 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.292 20:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 ************************************ 00:04:44.292 START TEST rpc_trace_cmd_test 00:04:44.292 ************************************ 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:44.292 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2870723", 00:04:44.292 "tpoint_group_mask": "0x8", 00:04:44.292 "iscsi_conn": { 00:04:44.292 "mask": "0x2", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "scsi": { 00:04:44.292 "mask": "0x4", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "bdev": { 00:04:44.292 "mask": "0x8", 00:04:44.292 "tpoint_mask": "0xffffffffffffffff" 00:04:44.292 }, 
00:04:44.292 "nvmf_rdma": { 00:04:44.292 "mask": "0x10", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "nvmf_tcp": { 00:04:44.292 "mask": "0x20", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "ftl": { 00:04:44.292 "mask": "0x40", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "blobfs": { 00:04:44.292 "mask": "0x80", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "dsa": { 00:04:44.292 "mask": "0x200", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "thread": { 00:04:44.292 "mask": "0x400", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "nvme_pcie": { 00:04:44.292 "mask": "0x800", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "iaa": { 00:04:44.292 "mask": "0x1000", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "nvme_tcp": { 00:04:44.292 "mask": "0x2000", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "bdev_nvme": { 00:04:44.292 "mask": "0x4000", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 }, 00:04:44.292 "sock": { 00:04:44.292 "mask": "0x8000", 00:04:44.292 "tpoint_mask": "0x0" 00:04:44.292 } 00:04:44.292 }' 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.292 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.551 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.551 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.551 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.551 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.551 20:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.551 00:04:44.551 real 0m0.229s 00:04:44.551 user 0m0.192s 00:04:44.551 sys 0m0.025s 00:04:44.551 20:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.551 20:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.551 ************************************ 00:04:44.551 END TEST rpc_trace_cmd_test 00:04:44.551 ************************************ 00:04:44.551 20:13:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.551 20:13:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.551 20:13:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.551 20:13:57 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.551 20:13:57 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.551 20:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.551 ************************************ 00:04:44.551 START TEST rpc_daemon_integrity 00:04:44.551 ************************************ 00:04:44.551 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:44.551 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.551 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.551 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.551 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:04:44.551 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.551 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.552 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.810 { 00:04:44.810 "name": "Malloc2", 00:04:44.810 "aliases": [ 00:04:44.810 "7b691ce1-314d-4400-8374-fdb8dd4b9533" 00:04:44.810 ], 00:04:44.810 "product_name": "Malloc disk", 00:04:44.810 "block_size": 512, 00:04:44.810 "num_blocks": 16384, 00:04:44.810 "uuid": "7b691ce1-314d-4400-8374-fdb8dd4b9533", 00:04:44.810 "assigned_rate_limits": { 00:04:44.810 "rw_ios_per_sec": 0, 00:04:44.810 "rw_mbytes_per_sec": 0, 00:04:44.810 "r_mbytes_per_sec": 0, 00:04:44.810 "w_mbytes_per_sec": 0 00:04:44.810 }, 00:04:44.810 "claimed": false, 00:04:44.810 "zoned": false, 00:04:44.810 "supported_io_types": { 00:04:44.810 "read": true, 00:04:44.810 "write": true, 00:04:44.810 "unmap": true, 00:04:44.810 "write_zeroes": true, 00:04:44.810 "flush": true, 00:04:44.810 "reset": true, 00:04:44.810 "compare": false, 00:04:44.810 "compare_and_write": false, 00:04:44.810 "abort": true, 00:04:44.810 "nvme_admin": false, 00:04:44.810 "nvme_io": false 00:04:44.810 }, 00:04:44.810 "memory_domains": [ 00:04:44.810 { 00:04:44.810 "dma_device_id": "system", 00:04:44.810 "dma_device_type": 1 00:04:44.810 }, 00:04:44.810 { 00:04:44.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.810 "dma_device_type": 2 00:04:44.810 } 00:04:44.810 ], 00:04:44.810 "driver_specific": {} 00:04:44.810 } 00:04:44.810 ]' 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.810 [2024-05-16 20:13:57.595592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.810 [2024-05-16 20:13:57.595617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.810 [2024-05-16 20:13:57.595630] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b89190 00:04:44.810 [2024-05-16 20:13:57.595636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.810 [2024-05-16 20:13:57.596578] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
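[Editor's note] The rpc_integrity and rpc_daemon_integrity cases (the second of which is being traced here) exercise the same JSON-RPC sequence: create a malloc bdev, stack a passthru bdev on it, confirm bdev_get_bdevs reports both, then tear down and confirm the list is empty (the `jq length` checks in the trace). A minimal Python sketch of that sequence over the target's Unix-domain RPC socket follows. The socket path, method names, and parameter keys are taken from the JSON visible in this log and may differ in other SPDK versions; the tests themselves go through rpc_cmd / scripts/rpc.py, so treat this purely as an illustration.

```python
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"   # default RPC socket, as seen later in this log

def rpc(method, params=None):
    """Send a single JSON-RPC request to a running SPDK target and return its result."""
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise RuntimeError("RPC socket closed before a full response arrived")
            buf += chunk
            try:
                resp = json.loads(buf)
                break
            except ValueError:      # response not complete yet, keep reading
                continue
    if "error" in resp:
        raise RuntimeError(resp["error"])
    return resp["result"]

# The same sequence the trace walks through: an 8 MiB malloc bdev with 512-byte
# blocks (16384 x 512, matching num_blocks/block_size in the JSON above), a
# passthru bdev stacked on it, and a length check on bdev_get_bdevs.
name = rpc("bdev_malloc_create", {"num_blocks": 16384, "block_size": 512})
rpc("bdev_passthru_create", {"base_bdev_name": name, "name": "Passthru0"})
assert len(rpc("bdev_get_bdevs")) == 2    # mirrors the '[' 2 == 2 ']' / jq length check

# Teardown, after which the bdev list should be empty again.
rpc("bdev_passthru_delete", {"name": "Passthru0"})
rpc("bdev_malloc_delete", {"name": name})
assert len(rpc("bdev_get_bdevs")) == 0
```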
00:04:44.810 [2024-05-16 20:13:57.596597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.810 Passthru0 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.810 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.810 { 00:04:44.810 "name": "Malloc2", 00:04:44.810 "aliases": [ 00:04:44.810 "7b691ce1-314d-4400-8374-fdb8dd4b9533" 00:04:44.810 ], 00:04:44.810 "product_name": "Malloc disk", 00:04:44.810 "block_size": 512, 00:04:44.810 "num_blocks": 16384, 00:04:44.810 "uuid": "7b691ce1-314d-4400-8374-fdb8dd4b9533", 00:04:44.810 "assigned_rate_limits": { 00:04:44.810 "rw_ios_per_sec": 0, 00:04:44.810 "rw_mbytes_per_sec": 0, 00:04:44.810 "r_mbytes_per_sec": 0, 00:04:44.810 "w_mbytes_per_sec": 0 00:04:44.810 }, 00:04:44.810 "claimed": true, 00:04:44.810 "claim_type": "exclusive_write", 00:04:44.810 "zoned": false, 00:04:44.810 "supported_io_types": { 00:04:44.810 "read": true, 00:04:44.810 "write": true, 00:04:44.810 "unmap": true, 00:04:44.810 "write_zeroes": true, 00:04:44.810 "flush": true, 00:04:44.810 "reset": true, 00:04:44.810 "compare": false, 00:04:44.810 "compare_and_write": false, 00:04:44.810 "abort": true, 00:04:44.810 "nvme_admin": false, 00:04:44.810 "nvme_io": false 00:04:44.810 }, 00:04:44.810 "memory_domains": [ 00:04:44.810 { 00:04:44.810 "dma_device_id": "system", 00:04:44.810 "dma_device_type": 1 00:04:44.810 }, 00:04:44.810 { 00:04:44.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.810 "dma_device_type": 2 00:04:44.810 } 00:04:44.810 ], 00:04:44.810 "driver_specific": {} 00:04:44.810 }, 00:04:44.810 { 00:04:44.810 "name": "Passthru0", 00:04:44.810 "aliases": [ 00:04:44.810 "8ea16343-ed8e-5923-8142-9e970e0a10cb" 00:04:44.810 ], 00:04:44.810 "product_name": "passthru", 00:04:44.810 "block_size": 512, 00:04:44.810 "num_blocks": 16384, 00:04:44.810 "uuid": "8ea16343-ed8e-5923-8142-9e970e0a10cb", 00:04:44.810 "assigned_rate_limits": { 00:04:44.810 "rw_ios_per_sec": 0, 00:04:44.810 "rw_mbytes_per_sec": 0, 00:04:44.811 "r_mbytes_per_sec": 0, 00:04:44.811 "w_mbytes_per_sec": 0 00:04:44.811 }, 00:04:44.811 "claimed": false, 00:04:44.811 "zoned": false, 00:04:44.811 "supported_io_types": { 00:04:44.811 "read": true, 00:04:44.811 "write": true, 00:04:44.811 "unmap": true, 00:04:44.811 "write_zeroes": true, 00:04:44.811 "flush": true, 00:04:44.811 "reset": true, 00:04:44.811 "compare": false, 00:04:44.811 "compare_and_write": false, 00:04:44.811 "abort": true, 00:04:44.811 "nvme_admin": false, 00:04:44.811 "nvme_io": false 00:04:44.811 }, 00:04:44.811 "memory_domains": [ 00:04:44.811 { 00:04:44.811 "dma_device_id": "system", 00:04:44.811 "dma_device_type": 1 00:04:44.811 }, 00:04:44.811 { 00:04:44.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.811 "dma_device_type": 2 00:04:44.811 } 00:04:44.811 ], 00:04:44.811 "driver_specific": { 00:04:44.811 "passthru": { 00:04:44.811 "name": "Passthru0", 00:04:44.811 "base_bdev_name": "Malloc2" 00:04:44.811 } 00:04:44.811 } 00:04:44.811 } 00:04:44.811 ]' 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.811 00:04:44.811 real 0m0.246s 00:04:44.811 user 0m0.158s 00:04:44.811 sys 0m0.030s 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.811 20:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.811 ************************************ 00:04:44.811 END TEST rpc_daemon_integrity 00:04:44.811 ************************************ 00:04:44.811 20:13:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.811 20:13:57 rpc -- rpc/rpc.sh@84 -- # killprocess 2870723 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@946 -- # '[' -z 2870723 ']' 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@950 -- # kill -0 2870723 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@951 -- # uname 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2870723 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2870723' 00:04:44.811 killing process with pid 2870723 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@965 -- # kill 2870723 00:04:44.811 20:13:57 rpc -- common/autotest_common.sh@970 -- # wait 2870723 00:04:45.377 00:04:45.377 real 0m2.398s 00:04:45.377 user 0m3.077s 00:04:45.377 sys 0m0.648s 00:04:45.377 20:13:58 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.377 20:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.377 ************************************ 00:04:45.377 END TEST rpc 00:04:45.377 ************************************ 00:04:45.377 20:13:58 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.377 20:13:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:04:45.377 20:13:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.377 20:13:58 -- common/autotest_common.sh@10 -- # set +x 00:04:45.377 ************************************ 00:04:45.377 START TEST skip_rpc 00:04:45.377 ************************************ 00:04:45.377 20:13:58 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.377 * Looking for test storage... 00:04:45.377 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:45.377 20:13:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:45.377 20:13:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:45.377 20:13:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.377 20:13:58 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.377 20:13:58 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.377 20:13:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.377 ************************************ 00:04:45.377 START TEST skip_rpc 00:04:45.377 ************************************ 00:04:45.377 20:13:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:45.377 20:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2871352 00:04:45.377 20:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.377 20:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.377 20:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.377 [2024-05-16 20:13:58.327924] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
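[Editor's note] The skip_rpc case starting here launches spdk_tgt with --no-rpc-server and then expects every rpc_cmd (the NOT rpc_cmd spdk_get_version further down) to fail with a non-zero exit status. Outside the harness, the same expectation can be checked by simply attempting a connection to the RPC socket, since nothing listens there when the RPC server is disabled. The sketch below assumes the default /var/tmp/spdk.sock path seen elsewhere in this log and a target started exactly as above.

```python
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # default path; nothing should listen here with --no-rpc-server

def rpc_server_up(path=RPC_SOCK, timeout=1.0):
    """Return True if something accepts connections on the SPDK RPC socket."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except (FileNotFoundError, ConnectionRefusedError, socket.timeout):
        return False
    finally:
        s.close()

# With spdk_tgt started via --no-rpc-server this should be False, which is
# exactly why the NOT rpc_cmd spdk_get_version assertion below passes (es=1).
assert rpc_server_up() is False
```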
00:04:45.377 [2024-05-16 20:13:58.327963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871352 ] 00:04:45.377 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.635 [2024-05-16 20:13:58.386527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.635 [2024-05-16 20:13:58.458576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2871352 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2871352 ']' 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2871352 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2871352 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2871352' 00:04:50.907 killing process with pid 2871352 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2871352 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2871352 00:04:50.907 00:04:50.907 real 0m5.365s 00:04:50.907 user 0m5.131s 00:04:50.907 sys 0m0.266s 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.907 20:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.907 ************************************ 00:04:50.907 END TEST skip_rpc 
00:04:50.907 ************************************ 00:04:50.907 20:14:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.907 20:14:03 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.907 20:14:03 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.907 20:14:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.907 ************************************ 00:04:50.907 START TEST skip_rpc_with_json 00:04:50.907 ************************************ 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2872303 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2872303 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2872303 ']' 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:50.907 20:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.907 [2024-05-16 20:14:03.766211] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
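[Editor's note] The skip_rpc_with_json case traced below queries for a TCP transport (the JSON-RPC error with code -19, "No such device", shows none exists in a fresh target), creates one, and then saves the running configuration. One way to mirror that RPC flow is sketched here; it reuses the helper from the earlier sketch rather than repeating it, and the method names and the "trtype" key are exactly those recorded in the trace.

```python
# rpc() is the minimal JSON-RPC helper from the sketch in the rpc_daemon_integrity
# trace above; it raises RuntimeError when the server returns an "error" object.

try:
    rpc("nvmf_get_transports", {"trtype": "tcp"})
except RuntimeError:
    # Matches the JSON-RPC error in the trace that follows: code -19,
    # "No such device" -- a freshly started target has no TCP transport yet.
    rpc("nvmf_create_transport", {"trtype": "tcp"})

# The harness then writes the full subsystem dump shown below to config.json
# via `rpc_cmd save_config`, restarts spdk_tgt with --json config.json, and
# greps the new log for "TCP Transport Init" to prove the config was replayed.
```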
00:04:50.907 [2024-05-16 20:14:03.766257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872303 ] 00:04:50.907 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.907 [2024-05-16 20:14:03.827823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.165 [2024-05-16 20:14:03.899883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.731 [2024-05-16 20:14:04.571430] nvmf_rpc.c:2548:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.731 request: 00:04:51.731 { 00:04:51.731 "trtype": "tcp", 00:04:51.731 "method": "nvmf_get_transports", 00:04:51.731 "req_id": 1 00:04:51.731 } 00:04:51.731 Got JSON-RPC error response 00:04:51.731 response: 00:04:51.731 { 00:04:51.731 "code": -19, 00:04:51.731 "message": "No such device" 00:04:51.731 } 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.731 [2024-05-16 20:14:04.583529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.731 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.990 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.990 20:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:51.990 { 00:04:51.990 "subsystems": [ 00:04:51.990 { 00:04:51.990 "subsystem": "keyring", 00:04:51.990 "config": [] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "iobuf", 00:04:51.990 "config": [ 00:04:51.990 { 00:04:51.990 "method": "iobuf_set_options", 00:04:51.990 "params": { 00:04:51.990 "small_pool_count": 8192, 00:04:51.990 "large_pool_count": 1024, 00:04:51.990 "small_bufsize": 8192, 00:04:51.990 "large_bufsize": 135168 00:04:51.990 } 00:04:51.990 } 00:04:51.990 ] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "sock", 00:04:51.990 "config": [ 00:04:51.990 { 00:04:51.990 "method": "sock_set_default_impl", 00:04:51.990 "params": { 00:04:51.990 "impl_name": "posix" 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "sock_impl_set_options", 00:04:51.990 "params": { 00:04:51.990 "impl_name": "ssl", 00:04:51.990 "recv_buf_size": 4096, 
00:04:51.990 "send_buf_size": 4096, 00:04:51.990 "enable_recv_pipe": true, 00:04:51.990 "enable_quickack": false, 00:04:51.990 "enable_placement_id": 0, 00:04:51.990 "enable_zerocopy_send_server": true, 00:04:51.990 "enable_zerocopy_send_client": false, 00:04:51.990 "zerocopy_threshold": 0, 00:04:51.990 "tls_version": 0, 00:04:51.990 "enable_ktls": false 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "sock_impl_set_options", 00:04:51.990 "params": { 00:04:51.990 "impl_name": "posix", 00:04:51.990 "recv_buf_size": 2097152, 00:04:51.990 "send_buf_size": 2097152, 00:04:51.990 "enable_recv_pipe": true, 00:04:51.990 "enable_quickack": false, 00:04:51.990 "enable_placement_id": 0, 00:04:51.990 "enable_zerocopy_send_server": true, 00:04:51.990 "enable_zerocopy_send_client": false, 00:04:51.990 "zerocopy_threshold": 0, 00:04:51.990 "tls_version": 0, 00:04:51.990 "enable_ktls": false 00:04:51.990 } 00:04:51.990 } 00:04:51.990 ] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "vmd", 00:04:51.990 "config": [] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "accel", 00:04:51.990 "config": [ 00:04:51.990 { 00:04:51.990 "method": "accel_set_options", 00:04:51.990 "params": { 00:04:51.990 "small_cache_size": 128, 00:04:51.990 "large_cache_size": 16, 00:04:51.990 "task_count": 2048, 00:04:51.990 "sequence_count": 2048, 00:04:51.990 "buf_count": 2048 00:04:51.990 } 00:04:51.990 } 00:04:51.990 ] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "bdev", 00:04:51.990 "config": [ 00:04:51.990 { 00:04:51.990 "method": "bdev_set_options", 00:04:51.990 "params": { 00:04:51.990 "bdev_io_pool_size": 65535, 00:04:51.990 "bdev_io_cache_size": 256, 00:04:51.990 "bdev_auto_examine": true, 00:04:51.990 "iobuf_small_cache_size": 128, 00:04:51.990 "iobuf_large_cache_size": 16 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "bdev_raid_set_options", 00:04:51.990 "params": { 00:04:51.990 "process_window_size_kb": 1024 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "bdev_iscsi_set_options", 00:04:51.990 "params": { 00:04:51.990 "timeout_sec": 30 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "bdev_nvme_set_options", 00:04:51.990 "params": { 00:04:51.990 "action_on_timeout": "none", 00:04:51.990 "timeout_us": 0, 00:04:51.990 "timeout_admin_us": 0, 00:04:51.990 "keep_alive_timeout_ms": 10000, 00:04:51.990 "arbitration_burst": 0, 00:04:51.990 "low_priority_weight": 0, 00:04:51.990 "medium_priority_weight": 0, 00:04:51.990 "high_priority_weight": 0, 00:04:51.990 "nvme_adminq_poll_period_us": 10000, 00:04:51.990 "nvme_ioq_poll_period_us": 0, 00:04:51.990 "io_queue_requests": 0, 00:04:51.990 "delay_cmd_submit": true, 00:04:51.990 "transport_retry_count": 4, 00:04:51.990 "bdev_retry_count": 3, 00:04:51.990 "transport_ack_timeout": 0, 00:04:51.990 "ctrlr_loss_timeout_sec": 0, 00:04:51.990 "reconnect_delay_sec": 0, 00:04:51.990 "fast_io_fail_timeout_sec": 0, 00:04:51.990 "disable_auto_failback": false, 00:04:51.990 "generate_uuids": false, 00:04:51.990 "transport_tos": 0, 00:04:51.990 "nvme_error_stat": false, 00:04:51.990 "rdma_srq_size": 0, 00:04:51.990 "io_path_stat": false, 00:04:51.990 "allow_accel_sequence": false, 00:04:51.990 "rdma_max_cq_size": 0, 00:04:51.990 "rdma_cm_event_timeout_ms": 0, 00:04:51.990 "dhchap_digests": [ 00:04:51.990 "sha256", 00:04:51.990 "sha384", 00:04:51.990 "sha512" 00:04:51.990 ], 00:04:51.990 "dhchap_dhgroups": [ 00:04:51.990 "null", 00:04:51.990 "ffdhe2048", 00:04:51.990 "ffdhe3072", 
00:04:51.990 "ffdhe4096", 00:04:51.990 "ffdhe6144", 00:04:51.990 "ffdhe8192" 00:04:51.990 ] 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "bdev_nvme_set_hotplug", 00:04:51.990 "params": { 00:04:51.990 "period_us": 100000, 00:04:51.990 "enable": false 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "bdev_wait_for_examine" 00:04:51.990 } 00:04:51.990 ] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "scsi", 00:04:51.990 "config": null 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "scheduler", 00:04:51.990 "config": [ 00:04:51.990 { 00:04:51.990 "method": "framework_set_scheduler", 00:04:51.990 "params": { 00:04:51.990 "name": "static" 00:04:51.990 } 00:04:51.990 } 00:04:51.990 ] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "vhost_scsi", 00:04:51.990 "config": [] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "vhost_blk", 00:04:51.990 "config": [] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "ublk", 00:04:51.990 "config": [] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "nbd", 00:04:51.990 "config": [] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "nvmf", 00:04:51.990 "config": [ 00:04:51.990 { 00:04:51.990 "method": "nvmf_set_config", 00:04:51.990 "params": { 00:04:51.990 "discovery_filter": "match_any", 00:04:51.990 "admin_cmd_passthru": { 00:04:51.990 "identify_ctrlr": false 00:04:51.990 } 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "nvmf_set_max_subsystems", 00:04:51.990 "params": { 00:04:51.990 "max_subsystems": 1024 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "nvmf_set_crdt", 00:04:51.990 "params": { 00:04:51.990 "crdt1": 0, 00:04:51.990 "crdt2": 0, 00:04:51.990 "crdt3": 0 00:04:51.990 } 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "method": "nvmf_create_transport", 00:04:51.990 "params": { 00:04:51.990 "trtype": "TCP", 00:04:51.990 "max_queue_depth": 128, 00:04:51.990 "max_io_qpairs_per_ctrlr": 127, 00:04:51.990 "in_capsule_data_size": 4096, 00:04:51.990 "max_io_size": 131072, 00:04:51.990 "io_unit_size": 131072, 00:04:51.990 "max_aq_depth": 128, 00:04:51.990 "num_shared_buffers": 511, 00:04:51.990 "buf_cache_size": 4294967295, 00:04:51.990 "dif_insert_or_strip": false, 00:04:51.990 "zcopy": false, 00:04:51.990 "c2h_success": true, 00:04:51.990 "sock_priority": 0, 00:04:51.990 "abort_timeout_sec": 1, 00:04:51.990 "ack_timeout": 0, 00:04:51.990 "data_wr_pool_size": 0 00:04:51.990 } 00:04:51.990 } 00:04:51.990 ] 00:04:51.990 }, 00:04:51.990 { 00:04:51.990 "subsystem": "iscsi", 00:04:51.990 "config": [ 00:04:51.990 { 00:04:51.990 "method": "iscsi_set_options", 00:04:51.990 "params": { 00:04:51.990 "node_base": "iqn.2016-06.io.spdk", 00:04:51.990 "max_sessions": 128, 00:04:51.990 "max_connections_per_session": 2, 00:04:51.990 "max_queue_depth": 64, 00:04:51.990 "default_time2wait": 2, 00:04:51.990 "default_time2retain": 20, 00:04:51.990 "first_burst_length": 8192, 00:04:51.990 "immediate_data": true, 00:04:51.990 "allow_duplicated_isid": false, 00:04:51.990 "error_recovery_level": 0, 00:04:51.990 "nop_timeout": 60, 00:04:51.990 "nop_in_interval": 30, 00:04:51.990 "disable_chap": false, 00:04:51.990 "require_chap": false, 00:04:51.990 "mutual_chap": false, 00:04:51.990 "chap_group": 0, 00:04:51.990 "max_large_datain_per_connection": 64, 00:04:51.990 "max_r2t_per_connection": 4, 00:04:51.990 "pdu_pool_size": 36864, 00:04:51.990 "immediate_data_pool_size": 16384, 00:04:51.990 "data_out_pool_size": 2048 00:04:51.990 } 
00:04:51.990 } 00:04:51.990 ] 00:04:51.990 } 00:04:51.990 ] 00:04:51.990 } 00:04:51.990 20:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.990 20:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2872303 00:04:51.990 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2872303 ']' 00:04:51.990 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2872303 00:04:51.990 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:51.990 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:51.991 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2872303 00:04:51.991 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:51.991 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:51.991 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2872303' 00:04:51.991 killing process with pid 2872303 00:04:51.991 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2872303 00:04:51.991 20:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2872303 00:04:52.248 20:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2872544 00:04:52.248 20:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:52.248 20:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2872544 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2872544 ']' 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2872544 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2872544 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2872544' 00:04:57.517 killing process with pid 2872544 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2872544 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2872544 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:57.517 00:04:57.517 real 0m6.722s 00:04:57.517 user 0m6.553s 00:04:57.517 sys 0m0.583s 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.517 ************************************ 00:04:57.517 END TEST skip_rpc_with_json 00:04:57.517 ************************************ 00:04:57.517 20:14:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.517 20:14:10 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.517 20:14:10 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.517 20:14:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.517 ************************************ 00:04:57.517 START TEST skip_rpc_with_delay 00:04:57.517 ************************************ 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.517 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.518 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.518 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.518 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.518 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.518 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.776 [2024-05-16 20:14:10.543610] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
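[Editor's note] The ERROR just above is the whole point of skip_rpc_with_delay: --wait-for-rpc is meaningless when the RPC server is disabled, so spdk_tgt must refuse to start and the NOT wrapper checks for a non-zero exit status (es=1). A small sketch of that assertion, using the binary path and flags from this job (adjust the path for your own tree), is:

```python
import subprocess

SPDK_TGT = "/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt"

# Same flag combination the test passes: RPC server disabled plus --wait-for-rpc.
proc = subprocess.run(
    [SPDK_TGT, "--no-rpc-server", "-m", "0x1", "--wait-for-rpc"],
    capture_output=True, text=True)

assert proc.returncode != 0                                  # es=1 in the trace
assert "Cannot use '--wait-for-rpc'" in proc.stdout + proc.stderr
```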
00:04:57.776 [2024-05-16 20:14:10.543667] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.776 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:57.776 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.776 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.776 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.776 00:04:57.776 real 0m0.051s 00:04:57.776 user 0m0.033s 00:04:57.776 sys 0m0.018s 00:04:57.776 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.776 20:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.776 ************************************ 00:04:57.776 END TEST skip_rpc_with_delay 00:04:57.776 ************************************ 00:04:57.776 20:14:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.776 20:14:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.776 20:14:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.776 20:14:10 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.776 20:14:10 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.776 20:14:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.776 ************************************ 00:04:57.776 START TEST exit_on_failed_rpc_init 00:04:57.776 ************************************ 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2873521 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2873521 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2873521 ']' 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:57.776 20:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.776 [2024-05-16 20:14:10.681247] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:57.776 [2024-05-16 20:14:10.681284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873521 ] 00:04:57.776 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.776 [2024-05-16 20:14:10.739677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.035 [2024-05-16 20:14:10.819900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.603 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.603 [2024-05-16 20:14:11.522792] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:58.603 [2024-05-16 20:14:11.522839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873627 ] 00:04:58.603 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.603 [2024-05-16 20:14:11.581330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.862 [2024-05-16 20:14:11.654811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.862 [2024-05-16 20:14:11.654875] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
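[Editor's note] The "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." error above is what exit_on_failed_rpc_init is after: the first spdk_tgt (pid 2873521, -m 0x1) already owns the default RPC socket, so the second instance (-m 0x2) must fail RPC init and exit cleanly. Outside the harness this is ordinary Unix-socket address reuse; the stdlib sketch below reproduces the collision with no SPDK involved (run it while no target is active). In normal use the collision is avoided by pointing the second instance at a different RPC socket path; here the test deliberately wants the clash.

```python
import os
import socket

PATH = "/var/tmp/spdk.sock"   # default RPC socket path both spdk_tgt instances try to claim

# Stand-in for the first spdk_tgt: bind and listen on the Unix socket.
first = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
first.bind(PATH)
first.listen(1)

# Stand-in for the second spdk_tgt: the same bind fails, just like the
# rpc.c "in use. Specify another." error recorded in the trace above.
second = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    second.bind(PATH)
except OSError as exc:                      # EADDRINUSE
    print("second bind refused, as expected:", exc)
finally:
    second.close()
    first.close()
    os.unlink(PATH)                         # clean up the stale socket file
```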
00:04:58.862 [2024-05-16 20:14:11.654884] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.862 [2024-05-16 20:14:11.654890] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2873521 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2873521 ']' 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2873521 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2873521 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2873521' 00:04:58.862 killing process with pid 2873521 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2873521 00:04:58.862 20:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2873521 00:04:59.121 00:04:59.121 real 0m1.450s 00:04:59.121 user 0m1.662s 00:04:59.121 sys 0m0.403s 00:04:59.121 20:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.121 20:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.121 ************************************ 00:04:59.121 END TEST exit_on_failed_rpc_init 00:04:59.121 ************************************ 00:04:59.121 20:14:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:59.121 00:04:59.121 real 0m13.932s 00:04:59.121 user 0m13.500s 00:04:59.121 sys 0m1.503s 00:04:59.121 20:14:12 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.121 20:14:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.121 ************************************ 00:04:59.121 END TEST skip_rpc 00:04:59.121 ************************************ 00:04:59.380 20:14:12 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.380 20:14:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.380 20:14:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.380 20:14:12 -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.380 ************************************ 00:04:59.380 START TEST rpc_client 00:04:59.380 ************************************ 00:04:59.380 20:14:12 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.380 * Looking for test storage... 00:04:59.380 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:59.380 20:14:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:59.380 OK 00:04:59.380 20:14:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.380 00:04:59.380 real 0m0.114s 00:04:59.380 user 0m0.049s 00:04:59.380 sys 0m0.073s 00:04:59.380 20:14:12 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.380 20:14:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.380 ************************************ 00:04:59.380 END TEST rpc_client 00:04:59.380 ************************************ 00:04:59.380 20:14:12 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.380 20:14:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.380 20:14:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.380 20:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:59.380 ************************************ 00:04:59.380 START TEST json_config 00:04:59.380 ************************************ 00:04:59.380 20:14:12 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:59.641 20:14:12 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.641 20:14:12 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.641 20:14:12 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.641 20:14:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.641 20:14:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.641 20:14:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.641 20:14:12 json_config -- paths/export.sh@5 -- # export PATH 00:04:59.641 20:14:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@47 -- # : 0 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.641 20:14:12 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:59.641 INFO: JSON configuration test init 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:59.641 20:14:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:59.641 20:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:59.641 20:14:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:59.641 20:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 20:14:12 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:59.641 20:14:12 json_config -- json_config/common.sh@9 -- # local app=target 00:04:59.641 20:14:12 json_config -- json_config/common.sh@10 -- # shift 00:04:59.641 20:14:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.641 20:14:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.641 20:14:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.641 20:14:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.641 20:14:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.641 20:14:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2873868 00:04:59.641 20:14:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.641 Waiting for target to run... 
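
For readers following the trace outside CI: the start-up step recorded above boils down to launching spdk_tgt in RPC-configuration mode and waiting for its UNIX-domain socket to answer. A minimal sketch, assuming the workspace paths used in this run; the polling loop is only an approximation of what waitforlisten does.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    TGT_PID=$!
    # Poll the RPC socket until the target responds (approximation of waitforlisten).
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
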
00:04:59.641 20:14:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:59.641 20:14:12 json_config -- json_config/common.sh@25 -- # waitforlisten 2873868 /var/tmp/spdk_tgt.sock 00:04:59.641 20:14:12 json_config -- common/autotest_common.sh@827 -- # '[' -z 2873868 ']' 00:04:59.641 20:14:12 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.641 20:14:12 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:59.641 20:14:12 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.642 20:14:12 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:59.642 20:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.642 [2024-05-16 20:14:12.500568] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:59.642 [2024-05-16 20:14:12.500614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873868 ] 00:04:59.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.214 [2024-05-16 20:14:12.935278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.214 [2024-05-16 20:14:13.024238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.471 20:14:13 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:00.471 20:14:13 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:00.471 20:14:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:00.471 00:05:00.471 20:14:13 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:00.471 20:14:13 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:00.471 20:14:13 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:00.471 20:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.471 20:14:13 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:00.471 20:14:13 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:00.471 20:14:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.471 20:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.471 20:14:13 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:00.471 20:14:13 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:00.471 20:14:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@45 -- 
# local ret=0 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:03.757 20:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:03.757 20:14:16 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:03.757 20:14:16 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:03.757 20:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:05:10.321 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:05:10.321 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:05:10.321 Found net devices under 0000:da:00.0: mlx_0_0 00:05:10.321 20:14:22 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:05:10.322 Found net devices under 0000:da:00.1: mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@58 -- # uname 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 
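
The prepare_net_devs / rdma_device_init sequence traced above amounts to loading the RDMA kernel modules and making sure each mlx5 port carries a test address. A sketch of the equivalent manual setup, assuming the interface names and the 192.168.100.0/24 scheme seen in this run; the `ip addr add` form is an assumption about what allocate_nic_ips does when an address is missing (here the addresses were already present).

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"                          # same module list as traced above
    done
    ip addr add 192.168.100.8/24 dev mlx_0_0     # first mlx5 port
    ip addr add 192.168.100.9/24 dev mlx_0_1     # second mlx5 port
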
00:05:10.322 20:14:22 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:10.322 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:10.322 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:05:10.322 altname enp218s0f0np0 00:05:10.322 altname ens818f0np0 00:05:10.322 inet 192.168.100.8/24 scope global mlx_0_0 00:05:10.322 valid_lft forever preferred_lft forever 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@75 -- # [[ 
-z 192.168.100.9 ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:10.322 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:10.322 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:05:10.322 altname enp218s0f1np1 00:05:10.322 altname ens818f1np1 00:05:10.322 inet 192.168.100.9/24 scope global mlx_0_1 00:05:10.322 valid_lft forever preferred_lft forever 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@422 -- # return 0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:10.322 192.168.100.9' 
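
The two-line RDMA_IP_LIST assembled above is then split into the first and second target addresses with head/tail, as the next entries show. The same logic in isolation:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
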
00:05:10.322 20:14:22 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:10.322 192.168.100.9' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:10.322 192.168.100.9' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:10.322 20:14:22 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:10.322 20:14:22 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:10.322 20:14:22 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.322 20:14:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.322 MallocForNvmf0 00:05:10.322 20:14:23 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.322 20:14:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.322 MallocForNvmf1 00:05:10.322 20:14:23 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:10.322 20:14:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:10.581 [2024-05-16 20:14:23.416720] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:10.581 [2024-05-16 20:14:23.444230] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20c7980/0x21f4800) succeed. 00:05:10.581 [2024-05-16 20:14:23.454952] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20c9b70/0x20d46c0) succeed. 
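
Collected in one place, these are the target-side RPCs that this stage and the entries that follow issue to build the NVMe-oF configuration; a sketch of replaying them by hand against the same socket (the RPC variable relies on normal word splitting, as in the test scripts).

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
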
00:05:10.581 20:14:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.581 20:14:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.867 20:14:23 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.867 20:14:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.867 20:14:23 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.867 20:14:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.148 20:14:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:11.148 20:14:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:11.407 [2024-05-16 20:14:24.163525] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:11.407 [2024-05-16 20:14:24.163889] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:11.407 20:14:24 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:11.407 20:14:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.407 20:14:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.407 20:14:24 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:11.407 20:14:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.407 20:14:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.407 20:14:24 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:11.407 20:14:24 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.407 20:14:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.666 MallocBdevForConfigChangeCheck 00:05:11.666 20:14:24 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:11.666 20:14:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.666 20:14:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.666 20:14:24 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:11.666 20:14:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.925 20:14:24 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: 
shutting down applications...' 00:05:11.925 INFO: shutting down applications... 00:05:11.925 20:14:24 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:11.925 20:14:24 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:11.925 20:14:24 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:11.925 20:14:24 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:14.458 Calling clear_iscsi_subsystem 00:05:14.458 Calling clear_nvmf_subsystem 00:05:14.458 Calling clear_nbd_subsystem 00:05:14.458 Calling clear_ublk_subsystem 00:05:14.458 Calling clear_vhost_blk_subsystem 00:05:14.458 Calling clear_vhost_scsi_subsystem 00:05:14.458 Calling clear_bdev_subsystem 00:05:14.458 20:14:26 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:14.458 20:14:26 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:14.458 20:14:26 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:14.458 20:14:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.458 20:14:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:14.458 20:14:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:14.458 20:14:27 json_config -- json_config/json_config.sh@345 -- # break 00:05:14.458 20:14:27 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:14.458 20:14:27 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:14.458 20:14:27 json_config -- json_config/common.sh@31 -- # local app=target 00:05:14.458 20:14:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.458 20:14:27 json_config -- json_config/common.sh@35 -- # [[ -n 2873868 ]] 00:05:14.458 20:14:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2873868 00:05:14.458 [2024-05-16 20:14:27.223682] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:14.458 20:14:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.458 20:14:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.458 20:14:27 json_config -- json_config/common.sh@41 -- # kill -0 2873868 00:05:14.458 20:14:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.458 [2024-05-16 20:14:27.324166] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:15.027 20:14:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.027 20:14:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.027 20:14:27 json_config -- json_config/common.sh@41 -- # kill -0 2873868 00:05:15.027 20:14:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.027 20:14:27 json_config -- json_config/common.sh@43 -- # break 00:05:15.027 20:14:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.027 20:14:27 json_config -- 
json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.027 SPDK target shutdown done 00:05:15.027 20:14:27 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:15.027 INFO: relaunching applications... 00:05:15.027 20:14:27 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.027 20:14:27 json_config -- json_config/common.sh@9 -- # local app=target 00:05:15.027 20:14:27 json_config -- json_config/common.sh@10 -- # shift 00:05:15.027 20:14:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.027 20:14:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.027 20:14:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.027 20:14:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.027 20:14:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.027 20:14:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2878897 00:05:15.027 20:14:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.027 20:14:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.027 Waiting for target to run... 00:05:15.027 20:14:27 json_config -- json_config/common.sh@25 -- # waitforlisten 2878897 /var/tmp/spdk_tgt.sock 00:05:15.027 20:14:27 json_config -- common/autotest_common.sh@827 -- # '[' -z 2878897 ']' 00:05:15.027 20:14:27 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.027 20:14:27 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:15.027 20:14:27 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.027 20:14:27 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:15.027 20:14:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.027 [2024-05-16 20:14:27.785086] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:15.027 [2024-05-16 20:14:27.785151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878897 ] 00:05:15.027 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.286 [2024-05-16 20:14:28.232274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.544 [2024-05-16 20:14:28.320774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.832 [2024-05-16 20:14:31.355043] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f488c0/0x2074ec0) succeed. 00:05:18.832 [2024-05-16 20:14:31.365870] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f4aab0/0x1f54d80) succeed. 
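
The relaunch recorded above differs from the initial start only in how the configuration is supplied: instead of --wait-for-rpc plus a stream of RPCs, the previously saved JSON is loaded at boot. A sketch, assuming the file was captured with the save_config RPC as the test does:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > $SPDK/spdk_tgt_config.json
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $SPDK/spdk_tgt_config.json
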
00:05:18.832 [2024-05-16 20:14:31.414241] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:18.832 [2024-05-16 20:14:31.414579] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:19.091 20:14:31 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:19.091 20:14:31 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:19.091 20:14:31 json_config -- json_config/common.sh@26 -- # echo '' 00:05:19.091 00:05:19.091 20:14:31 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:19.091 20:14:31 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:19.091 INFO: Checking if target configuration is the same... 00:05:19.091 20:14:31 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.091 20:14:31 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:19.091 20:14:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.091 + '[' 2 -ne 2 ']' 00:05:19.091 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.091 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:19.091 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:19.091 +++ basename /dev/fd/62 00:05:19.091 ++ mktemp /tmp/62.XXX 00:05:19.091 + tmp_file_1=/tmp/62.5XA 00:05:19.091 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.091 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.091 + tmp_file_2=/tmp/spdk_tgt_config.json.N7b 00:05:19.091 + ret=0 00:05:19.091 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.349 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.349 + diff -u /tmp/62.5XA /tmp/spdk_tgt_config.json.N7b 00:05:19.349 + echo 'INFO: JSON config files are the same' 00:05:19.349 INFO: JSON config files are the same 00:05:19.349 + rm /tmp/62.5XA /tmp/spdk_tgt_config.json.N7b 00:05:19.349 + exit 0 00:05:19.349 20:14:32 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:19.349 20:14:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:19.349 INFO: changing configuration and checking if this can be detected... 
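
The "is the configuration the same" check above normalises both the live config and the saved file with config_filter.py -method sort before a plain textual diff. A simplified sketch; the /tmp filenames here are illustrative, while json_diff.sh itself uses mktemp as traced below.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.json
    $SPDK/test/json_config/config_filter.py -method sort \
        < $SPDK/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'
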
00:05:19.349 20:14:32 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.349 20:14:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.608 20:14:32 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:19.608 20:14:32 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.608 20:14:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.608 + '[' 2 -ne 2 ']' 00:05:19.608 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.608 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:19.608 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:19.608 +++ basename /dev/fd/62 00:05:19.608 ++ mktemp /tmp/62.XXX 00:05:19.608 + tmp_file_1=/tmp/62.u3o 00:05:19.608 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.608 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.608 + tmp_file_2=/tmp/spdk_tgt_config.json.9OA 00:05:19.608 + ret=0 00:05:19.608 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.867 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.867 + diff -u /tmp/62.u3o /tmp/spdk_tgt_config.json.9OA 00:05:19.867 + ret=1 00:05:19.867 + echo '=== Start of file: /tmp/62.u3o ===' 00:05:19.867 + cat /tmp/62.u3o 00:05:19.867 + echo '=== End of file: /tmp/62.u3o ===' 00:05:19.867 + echo '' 00:05:19.867 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9OA ===' 00:05:19.867 + cat /tmp/spdk_tgt_config.json.9OA 00:05:19.867 + echo '=== End of file: /tmp/spdk_tgt_config.json.9OA ===' 00:05:19.867 + echo '' 00:05:19.867 + rm /tmp/62.u3o /tmp/spdk_tgt_config.json.9OA 00:05:19.867 + exit 1 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:19.867 INFO: configuration change detected. 
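
The "change" this second pass is expected to detect is simply the removal of the scratch bdev created before save_config; the diff then comes back non-empty (ret=1), which json_config.sh reports as "configuration change detected". The two RPCs involved, as traced in this log:

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # created earlier, lands in the saved JSON
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck                # deleted here, so the live config now differs
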
00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:19.867 20:14:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.867 20:14:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@317 -- # [[ -n 2878897 ]] 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:19.867 20:14:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.867 20:14:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:19.867 20:14:32 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:19.867 20:14:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.867 20:14:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.126 20:14:32 json_config -- json_config/json_config.sh@323 -- # killprocess 2878897 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@946 -- # '[' -z 2878897 ']' 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@950 -- # kill -0 2878897 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@951 -- # uname 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2878897 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2878897' 00:05:20.126 killing process with pid 2878897 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@965 -- # kill 2878897 00:05:20.126 [2024-05-16 20:14:32.908589] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:20.126 20:14:32 json_config -- common/autotest_common.sh@970 -- # wait 2878897 00:05:20.126 [2024-05-16 20:14:33.007701] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:22.120 20:14:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.120 20:14:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:22.120 20:14:35 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.120 20:14:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.120 20:14:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:22.120 20:14:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:22.120 INFO: Success 00:05:22.120 20:14:35 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:22.120 20:14:35 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:22.120 20:14:35 json_config -- nvmf/common.sh@117 -- # sync 00:05:22.120 20:14:35 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:22.120 20:14:35 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:22.120 20:14:35 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:05:22.120 20:14:35 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:22.120 20:14:35 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:05:22.120 00:05:22.120 real 0m22.717s 00:05:22.120 user 0m24.901s 00:05:22.120 sys 0m6.891s 00:05:22.120 20:14:35 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.120 20:14:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.120 ************************************ 00:05:22.120 END TEST json_config 00:05:22.120 ************************************ 00:05:22.379 20:14:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.379 20:14:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.379 20:14:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.379 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:22.379 ************************************ 00:05:22.379 START TEST json_config_extra_key 00:05:22.379 ************************************ 00:05:22.379 20:14:35 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@19 
-- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:22.379 20:14:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.379 20:14:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.379 20:14:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.379 20:14:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.379 20:14:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.379 20:14:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.379 20:14:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:22.379 20:14:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:22.379 20:14:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:22.379 20:14:35 
json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:22.379 INFO: launching applications... 00:05:22.379 20:14:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2880389 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.379 Waiting for target to run... 
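
As recorded in the entries that follow, json_config_extra_key.sh starts the target directly from the static test/json_config/extra_key.json file and then shuts it down again; no configuration is built over RPC in this test. The launch command in isolation:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $SPDK/test/json_config/extra_key.json
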
00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2880389 /var/tmp/spdk_tgt.sock 00:05:22.379 20:14:35 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2880389 ']' 00:05:22.379 20:14:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:22.379 20:14:35 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.379 20:14:35 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.379 20:14:35 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.379 20:14:35 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.379 20:14:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.379 [2024-05-16 20:14:35.300224] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:22.379 [2024-05-16 20:14:35.300277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880389 ] 00:05:22.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.947 [2024-05-16 20:14:35.742234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.947 [2024-05-16 20:14:35.834065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.206 20:14:36 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.206 20:14:36 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:23.206 00:05:23.206 20:14:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:23.206 INFO: shutting down applications... 
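The shutdown that follows does not kill the target outright: json_config/common.sh sends SIGINT and then polls the PID in a bounded loop, giving the app roughly 15 seconds to exit on its own. A condensed sketch of that loop (variable names are illustrative; the logic mirrors the trace below):

    # Ask the target to exit, then wait for the PID to disappear.
    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # process gone: clean shutdown
        sleep 0.5
    done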
00:05:23.206 20:14:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2880389 ]] 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2880389 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2880389 00:05:23.206 20:14:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.774 20:14:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.774 20:14:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.774 20:14:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2880389 00:05:23.774 20:14:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.774 20:14:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:23.774 20:14:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.774 20:14:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.774 SPDK target shutdown done 00:05:23.774 20:14:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:23.774 Success 00:05:23.774 00:05:23.774 real 0m1.447s 00:05:23.774 user 0m1.068s 00:05:23.774 sys 0m0.532s 00:05:23.774 20:14:36 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.774 20:14:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.774 ************************************ 00:05:23.774 END TEST json_config_extra_key 00:05:23.774 ************************************ 00:05:23.774 20:14:36 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.774 20:14:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.774 20:14:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.774 20:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.774 ************************************ 00:05:23.774 START TEST alias_rpc 00:05:23.774 ************************************ 00:05:23.774 20:14:36 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.774 * Looking for test storage... 
00:05:23.774 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:23.774 20:14:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.774 20:14:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2880671 00:05:23.774 20:14:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.774 20:14:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2880671 00:05:23.774 20:14:36 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2880671 ']' 00:05:23.774 20:14:36 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.774 20:14:36 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.774 20:14:36 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.774 20:14:36 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.774 20:14:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.033 [2024-05-16 20:14:36.796151] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:24.033 [2024-05-16 20:14:36.796193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880671 ] 00:05:24.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.033 [2024-05-16 20:14:36.856147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.033 [2024-05-16 20:14:36.931893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:24.968 20:14:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:24.968 20:14:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2880671 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2880671 ']' 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2880671 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2880671 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2880671' 00:05:24.968 killing process with pid 2880671 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@965 -- # kill 2880671 00:05:24.968 20:14:37 alias_rpc -- common/autotest_common.sh@970 -- # wait 2880671 00:05:25.226 00:05:25.226 real 0m1.478s 00:05:25.226 user 0m1.624s 00:05:25.226 sys 0m0.398s 00:05:25.226 20:14:38 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.226 20:14:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 ************************************ 
00:05:25.226 END TEST alias_rpc 00:05:25.226 ************************************ 00:05:25.226 20:14:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:25.226 20:14:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.226 20:14:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.226 20:14:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.226 20:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 ************************************ 00:05:25.226 START TEST spdkcli_tcp 00:05:25.226 ************************************ 00:05:25.226 20:14:38 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.484 * Looking for test storage... 00:05:25.484 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.484 20:14:38 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:25.484 20:14:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2880963 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2880963 00:05:25.484 20:14:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.484 20:14:38 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2880963 ']' 00:05:25.484 20:14:38 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.484 20:14:38 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.484 20:14:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.484 20:14:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.484 20:14:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.484 [2024-05-16 20:14:38.351439] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
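The spdkcli_tcp test that starts here exercises the same RPC server over TCP rather than the UNIX socket: IP_ADDRESS=127.0.0.1 and PORT=9998 are set above, and further down the trace a socat process bridges that TCP port to /var/tmp/spdk.sock while rpc.py is pointed at the TCP address. A rough sketch of that bridge, using the addresses and flags exactly as the test invokes them (my reading of -r/-t is connection retries and per-request timeout):

    # Bridge TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Issue an RPC over TCP instead of the UNIX socket.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"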
00:05:25.484 [2024-05-16 20:14:38.351480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880963 ] 00:05:25.484 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.484 [2024-05-16 20:14:38.408004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.742 [2024-05-16 20:14:38.484170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.742 [2024-05-16 20:14:38.484172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.309 20:14:39 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.309 20:14:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:26.309 20:14:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2881194 00:05:26.309 20:14:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.309 20:14:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.568 [ 00:05:26.568 "bdev_malloc_delete", 00:05:26.568 "bdev_malloc_create", 00:05:26.568 "bdev_null_resize", 00:05:26.568 "bdev_null_delete", 00:05:26.568 "bdev_null_create", 00:05:26.568 "bdev_nvme_cuse_unregister", 00:05:26.568 "bdev_nvme_cuse_register", 00:05:26.568 "bdev_opal_new_user", 00:05:26.568 "bdev_opal_set_lock_state", 00:05:26.568 "bdev_opal_delete", 00:05:26.568 "bdev_opal_get_info", 00:05:26.568 "bdev_opal_create", 00:05:26.568 "bdev_nvme_opal_revert", 00:05:26.568 "bdev_nvme_opal_init", 00:05:26.568 "bdev_nvme_send_cmd", 00:05:26.568 "bdev_nvme_get_path_iostat", 00:05:26.568 "bdev_nvme_get_mdns_discovery_info", 00:05:26.568 "bdev_nvme_stop_mdns_discovery", 00:05:26.568 "bdev_nvme_start_mdns_discovery", 00:05:26.568 "bdev_nvme_set_multipath_policy", 00:05:26.568 "bdev_nvme_set_preferred_path", 00:05:26.568 "bdev_nvme_get_io_paths", 00:05:26.568 "bdev_nvme_remove_error_injection", 00:05:26.568 "bdev_nvme_add_error_injection", 00:05:26.568 "bdev_nvme_get_discovery_info", 00:05:26.568 "bdev_nvme_stop_discovery", 00:05:26.568 "bdev_nvme_start_discovery", 00:05:26.568 "bdev_nvme_get_controller_health_info", 00:05:26.568 "bdev_nvme_disable_controller", 00:05:26.568 "bdev_nvme_enable_controller", 00:05:26.568 "bdev_nvme_reset_controller", 00:05:26.568 "bdev_nvme_get_transport_statistics", 00:05:26.568 "bdev_nvme_apply_firmware", 00:05:26.568 "bdev_nvme_detach_controller", 00:05:26.568 "bdev_nvme_get_controllers", 00:05:26.568 "bdev_nvme_attach_controller", 00:05:26.568 "bdev_nvme_set_hotplug", 00:05:26.568 "bdev_nvme_set_options", 00:05:26.568 "bdev_passthru_delete", 00:05:26.568 "bdev_passthru_create", 00:05:26.568 "bdev_lvol_set_parent_bdev", 00:05:26.568 "bdev_lvol_set_parent", 00:05:26.568 "bdev_lvol_check_shallow_copy", 00:05:26.568 "bdev_lvol_start_shallow_copy", 00:05:26.568 "bdev_lvol_grow_lvstore", 00:05:26.568 "bdev_lvol_get_lvols", 00:05:26.568 "bdev_lvol_get_lvstores", 00:05:26.568 "bdev_lvol_delete", 00:05:26.568 "bdev_lvol_set_read_only", 00:05:26.568 "bdev_lvol_resize", 00:05:26.568 "bdev_lvol_decouple_parent", 00:05:26.568 "bdev_lvol_inflate", 00:05:26.568 "bdev_lvol_rename", 00:05:26.568 "bdev_lvol_clone_bdev", 00:05:26.568 "bdev_lvol_clone", 00:05:26.568 "bdev_lvol_snapshot", 00:05:26.568 "bdev_lvol_create", 00:05:26.568 "bdev_lvol_delete_lvstore", 00:05:26.568 "bdev_lvol_rename_lvstore", 
00:05:26.568 "bdev_lvol_create_lvstore", 00:05:26.568 "bdev_raid_set_options", 00:05:26.568 "bdev_raid_remove_base_bdev", 00:05:26.568 "bdev_raid_add_base_bdev", 00:05:26.568 "bdev_raid_delete", 00:05:26.568 "bdev_raid_create", 00:05:26.568 "bdev_raid_get_bdevs", 00:05:26.568 "bdev_error_inject_error", 00:05:26.568 "bdev_error_delete", 00:05:26.568 "bdev_error_create", 00:05:26.568 "bdev_split_delete", 00:05:26.568 "bdev_split_create", 00:05:26.568 "bdev_delay_delete", 00:05:26.568 "bdev_delay_create", 00:05:26.568 "bdev_delay_update_latency", 00:05:26.568 "bdev_zone_block_delete", 00:05:26.568 "bdev_zone_block_create", 00:05:26.568 "blobfs_create", 00:05:26.568 "blobfs_detect", 00:05:26.568 "blobfs_set_cache_size", 00:05:26.568 "bdev_aio_delete", 00:05:26.568 "bdev_aio_rescan", 00:05:26.568 "bdev_aio_create", 00:05:26.568 "bdev_ftl_set_property", 00:05:26.568 "bdev_ftl_get_properties", 00:05:26.568 "bdev_ftl_get_stats", 00:05:26.568 "bdev_ftl_unmap", 00:05:26.568 "bdev_ftl_unload", 00:05:26.568 "bdev_ftl_delete", 00:05:26.568 "bdev_ftl_load", 00:05:26.568 "bdev_ftl_create", 00:05:26.568 "bdev_virtio_attach_controller", 00:05:26.568 "bdev_virtio_scsi_get_devices", 00:05:26.568 "bdev_virtio_detach_controller", 00:05:26.568 "bdev_virtio_blk_set_hotplug", 00:05:26.568 "bdev_iscsi_delete", 00:05:26.568 "bdev_iscsi_create", 00:05:26.568 "bdev_iscsi_set_options", 00:05:26.568 "accel_error_inject_error", 00:05:26.568 "ioat_scan_accel_module", 00:05:26.568 "dsa_scan_accel_module", 00:05:26.568 "iaa_scan_accel_module", 00:05:26.568 "keyring_file_remove_key", 00:05:26.568 "keyring_file_add_key", 00:05:26.568 "iscsi_get_histogram", 00:05:26.568 "iscsi_enable_histogram", 00:05:26.568 "iscsi_set_options", 00:05:26.568 "iscsi_get_auth_groups", 00:05:26.568 "iscsi_auth_group_remove_secret", 00:05:26.568 "iscsi_auth_group_add_secret", 00:05:26.568 "iscsi_delete_auth_group", 00:05:26.568 "iscsi_create_auth_group", 00:05:26.568 "iscsi_set_discovery_auth", 00:05:26.568 "iscsi_get_options", 00:05:26.568 "iscsi_target_node_request_logout", 00:05:26.568 "iscsi_target_node_set_redirect", 00:05:26.568 "iscsi_target_node_set_auth", 00:05:26.568 "iscsi_target_node_add_lun", 00:05:26.568 "iscsi_get_stats", 00:05:26.568 "iscsi_get_connections", 00:05:26.568 "iscsi_portal_group_set_auth", 00:05:26.568 "iscsi_start_portal_group", 00:05:26.568 "iscsi_delete_portal_group", 00:05:26.568 "iscsi_create_portal_group", 00:05:26.568 "iscsi_get_portal_groups", 00:05:26.568 "iscsi_delete_target_node", 00:05:26.568 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.568 "iscsi_target_node_add_pg_ig_maps", 00:05:26.568 "iscsi_create_target_node", 00:05:26.568 "iscsi_get_target_nodes", 00:05:26.568 "iscsi_delete_initiator_group", 00:05:26.568 "iscsi_initiator_group_remove_initiators", 00:05:26.568 "iscsi_initiator_group_add_initiators", 00:05:26.568 "iscsi_create_initiator_group", 00:05:26.568 "iscsi_get_initiator_groups", 00:05:26.568 "nvmf_set_crdt", 00:05:26.568 "nvmf_set_config", 00:05:26.568 "nvmf_set_max_subsystems", 00:05:26.568 "nvmf_stop_mdns_prr", 00:05:26.568 "nvmf_publish_mdns_prr", 00:05:26.568 "nvmf_subsystem_get_listeners", 00:05:26.568 "nvmf_subsystem_get_qpairs", 00:05:26.568 "nvmf_subsystem_get_controllers", 00:05:26.568 "nvmf_get_stats", 00:05:26.568 "nvmf_get_transports", 00:05:26.568 "nvmf_create_transport", 00:05:26.568 "nvmf_get_targets", 00:05:26.568 "nvmf_delete_target", 00:05:26.568 "nvmf_create_target", 00:05:26.568 "nvmf_subsystem_allow_any_host", 00:05:26.568 "nvmf_subsystem_remove_host", 00:05:26.568 
"nvmf_subsystem_add_host", 00:05:26.568 "nvmf_ns_remove_host", 00:05:26.568 "nvmf_ns_add_host", 00:05:26.568 "nvmf_subsystem_remove_ns", 00:05:26.568 "nvmf_subsystem_add_ns", 00:05:26.568 "nvmf_subsystem_listener_set_ana_state", 00:05:26.568 "nvmf_discovery_get_referrals", 00:05:26.568 "nvmf_discovery_remove_referral", 00:05:26.568 "nvmf_discovery_add_referral", 00:05:26.568 "nvmf_subsystem_remove_listener", 00:05:26.568 "nvmf_subsystem_add_listener", 00:05:26.568 "nvmf_delete_subsystem", 00:05:26.568 "nvmf_create_subsystem", 00:05:26.568 "nvmf_get_subsystems", 00:05:26.568 "env_dpdk_get_mem_stats", 00:05:26.568 "nbd_get_disks", 00:05:26.568 "nbd_stop_disk", 00:05:26.568 "nbd_start_disk", 00:05:26.568 "ublk_recover_disk", 00:05:26.568 "ublk_get_disks", 00:05:26.568 "ublk_stop_disk", 00:05:26.568 "ublk_start_disk", 00:05:26.568 "ublk_destroy_target", 00:05:26.568 "ublk_create_target", 00:05:26.568 "virtio_blk_create_transport", 00:05:26.568 "virtio_blk_get_transports", 00:05:26.568 "vhost_controller_set_coalescing", 00:05:26.568 "vhost_get_controllers", 00:05:26.568 "vhost_delete_controller", 00:05:26.568 "vhost_create_blk_controller", 00:05:26.568 "vhost_scsi_controller_remove_target", 00:05:26.568 "vhost_scsi_controller_add_target", 00:05:26.568 "vhost_start_scsi_controller", 00:05:26.568 "vhost_create_scsi_controller", 00:05:26.568 "thread_set_cpumask", 00:05:26.568 "framework_get_scheduler", 00:05:26.568 "framework_set_scheduler", 00:05:26.568 "framework_get_reactors", 00:05:26.568 "thread_get_io_channels", 00:05:26.568 "thread_get_pollers", 00:05:26.568 "thread_get_stats", 00:05:26.568 "framework_monitor_context_switch", 00:05:26.568 "spdk_kill_instance", 00:05:26.568 "log_enable_timestamps", 00:05:26.568 "log_get_flags", 00:05:26.568 "log_clear_flag", 00:05:26.568 "log_set_flag", 00:05:26.568 "log_get_level", 00:05:26.568 "log_set_level", 00:05:26.568 "log_get_print_level", 00:05:26.568 "log_set_print_level", 00:05:26.568 "framework_enable_cpumask_locks", 00:05:26.568 "framework_disable_cpumask_locks", 00:05:26.568 "framework_wait_init", 00:05:26.568 "framework_start_init", 00:05:26.568 "scsi_get_devices", 00:05:26.568 "bdev_get_histogram", 00:05:26.568 "bdev_enable_histogram", 00:05:26.568 "bdev_set_qos_limit", 00:05:26.568 "bdev_set_qd_sampling_period", 00:05:26.568 "bdev_get_bdevs", 00:05:26.568 "bdev_reset_iostat", 00:05:26.568 "bdev_get_iostat", 00:05:26.568 "bdev_examine", 00:05:26.568 "bdev_wait_for_examine", 00:05:26.568 "bdev_set_options", 00:05:26.568 "notify_get_notifications", 00:05:26.568 "notify_get_types", 00:05:26.568 "accel_get_stats", 00:05:26.568 "accel_set_options", 00:05:26.568 "accel_set_driver", 00:05:26.568 "accel_crypto_key_destroy", 00:05:26.568 "accel_crypto_keys_get", 00:05:26.568 "accel_crypto_key_create", 00:05:26.568 "accel_assign_opc", 00:05:26.568 "accel_get_module_info", 00:05:26.568 "accel_get_opc_assignments", 00:05:26.568 "vmd_rescan", 00:05:26.568 "vmd_remove_device", 00:05:26.568 "vmd_enable", 00:05:26.568 "sock_get_default_impl", 00:05:26.568 "sock_set_default_impl", 00:05:26.568 "sock_impl_set_options", 00:05:26.568 "sock_impl_get_options", 00:05:26.568 "iobuf_get_stats", 00:05:26.568 "iobuf_set_options", 00:05:26.568 "framework_get_pci_devices", 00:05:26.568 "framework_get_config", 00:05:26.568 "framework_get_subsystems", 00:05:26.568 "trace_get_info", 00:05:26.568 "trace_get_tpoint_group_mask", 00:05:26.568 "trace_disable_tpoint_group", 00:05:26.568 "trace_enable_tpoint_group", 00:05:26.568 "trace_clear_tpoint_mask", 00:05:26.568 
"trace_set_tpoint_mask", 00:05:26.568 "keyring_get_keys", 00:05:26.568 "spdk_get_version", 00:05:26.568 "rpc_get_methods" 00:05:26.568 ] 00:05:26.568 20:14:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.568 20:14:39 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.568 20:14:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.568 20:14:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.568 20:14:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2880963 00:05:26.568 20:14:39 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2880963 ']' 00:05:26.568 20:14:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2880963 00:05:26.568 20:14:39 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:26.569 20:14:39 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:26.569 20:14:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2880963 00:05:26.569 20:14:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:26.569 20:14:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:26.569 20:14:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2880963' 00:05:26.569 killing process with pid 2880963 00:05:26.569 20:14:39 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2880963 00:05:26.569 20:14:39 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2880963 00:05:26.827 00:05:26.827 real 0m1.545s 00:05:26.827 user 0m2.926s 00:05:26.827 sys 0m0.434s 00:05:26.827 20:14:39 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.827 20:14:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.827 ************************************ 00:05:26.827 END TEST spdkcli_tcp 00:05:26.827 ************************************ 00:05:26.827 20:14:39 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.827 20:14:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.828 20:14:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.828 20:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:27.086 ************************************ 00:05:27.086 START TEST dpdk_mem_utility 00:05:27.086 ************************************ 00:05:27.086 20:14:39 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.086 * Looking for test storage... 
00:05:27.086 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:27.086 20:14:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.086 20:14:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2881259 00:05:27.086 20:14:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2881259 00:05:27.086 20:14:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.086 20:14:39 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2881259 ']' 00:05:27.086 20:14:39 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.086 20:14:39 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.086 20:14:39 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.086 20:14:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.086 20:14:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.086 [2024-05-16 20:14:39.966920] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:27.086 [2024-05-16 20:14:39.966973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881259 ] 00:05:27.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.086 [2024-05-16 20:14:40.029668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.345 [2024-05-16 20:14:40.116354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.912 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.912 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:27.912 20:14:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.912 20:14:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.912 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.912 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.912 { 00:05:27.912 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.912 } 00:05:27.912 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.912 20:14:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.912 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:27.912 1 heaps totaling size 814.000000 MiB 00:05:27.912 size: 814.000000 MiB heap id: 0 00:05:27.912 end heaps---------- 00:05:27.912 8 mempools totaling size 598.116089 MiB 00:05:27.912 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.912 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.912 size: 84.521057 MiB name: bdev_io_2881259 00:05:27.912 size: 51.011292 MiB name: evtpool_2881259 00:05:27.912 size: 50.003479 MiB name: msgpool_2881259 
00:05:27.912 size: 21.763794 MiB name: PDU_Pool 00:05:27.912 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.912 size: 0.026123 MiB name: Session_Pool 00:05:27.912 end mempools------- 00:05:27.912 6 memzones totaling size 4.142822 MiB 00:05:27.912 size: 1.000366 MiB name: RG_ring_0_2881259 00:05:27.912 size: 1.000366 MiB name: RG_ring_1_2881259 00:05:27.912 size: 1.000366 MiB name: RG_ring_4_2881259 00:05:27.912 size: 1.000366 MiB name: RG_ring_5_2881259 00:05:27.912 size: 0.125366 MiB name: RG_ring_2_2881259 00:05:27.912 size: 0.015991 MiB name: RG_ring_3_2881259 00:05:27.912 end memzones------- 00:05:27.912 20:14:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.912 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:27.912 list of free elements. size: 12.519348 MiB 00:05:27.912 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:27.912 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:27.912 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:27.912 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:27.912 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:27.912 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:27.912 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:27.912 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:27.912 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:27.912 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:27.912 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:27.912 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:27.912 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:27.912 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:27.912 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:27.912 list of standard malloc elements. 
size: 199.218079 MiB 00:05:27.912 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:27.912 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:27.912 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.912 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:27.912 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:27.912 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.912 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:27.912 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.912 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:27.912 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.912 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:27.912 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:27.912 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:27.912 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:27.912 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:27.912 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:27.912 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:27.912 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:27.912 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:27.912 list of memzone associated elements. 
size: 602.262573 MiB 00:05:27.912 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:27.912 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.912 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:27.912 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.912 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:27.912 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2881259_0 00:05:27.912 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:27.912 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2881259_0 00:05:27.913 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:27.913 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2881259_0 00:05:27.913 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:27.913 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.913 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:27.913 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.913 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:27.913 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2881259 00:05:27.913 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:27.913 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2881259 00:05:27.913 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.913 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2881259 00:05:27.913 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:27.913 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.913 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:27.913 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.913 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:27.913 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.913 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:27.913 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.913 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:27.913 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2881259 00:05:27.913 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:27.913 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2881259 00:05:27.913 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:27.913 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2881259 00:05:27.913 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:27.913 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2881259 00:05:27.913 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:27.913 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2881259 00:05:27.913 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:27.913 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.913 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:27.913 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.913 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:27.913 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.913 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:27.913 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2881259 00:05:27.913 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:27.913 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.913 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:27.913 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.913 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:27.913 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2881259 00:05:27.913 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:27.913 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.913 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:27.913 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2881259 00:05:27.913 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:27.913 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2881259 00:05:27.913 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:27.913 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.913 20:14:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.913 20:14:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2881259 00:05:27.913 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2881259 ']' 00:05:27.913 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2881259 00:05:27.913 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:27.913 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.913 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2881259 00:05:28.172 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:28.172 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:28.172 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2881259' 00:05:28.172 killing process with pid 2881259 00:05:28.172 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2881259 00:05:28.172 20:14:40 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2881259 00:05:28.431 00:05:28.431 real 0m1.394s 00:05:28.431 user 0m1.458s 00:05:28.431 sys 0m0.406s 00:05:28.431 20:14:41 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.431 20:14:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.431 ************************************ 00:05:28.431 END TEST dpdk_mem_utility 00:05:28.431 ************************************ 00:05:28.431 20:14:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:28.431 20:14:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.431 20:14:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.431 20:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:28.431 ************************************ 00:05:28.431 START TEST event 00:05:28.431 ************************************ 00:05:28.431 20:14:41 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:28.431 * Looking for test storage... 
00:05:28.431 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:28.431 20:14:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.431 20:14:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.431 20:14:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.431 20:14:41 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:28.431 20:14:41 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.431 20:14:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.431 ************************************ 00:05:28.431 START TEST event_perf 00:05:28.431 ************************************ 00:05:28.431 20:14:41 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.691 Running I/O for 1 seconds...[2024-05-16 20:14:41.439121] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:28.691 [2024-05-16 20:14:41.439210] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881554 ] 00:05:28.691 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.691 [2024-05-16 20:14:41.504356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.691 [2024-05-16 20:14:41.586154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.691 [2024-05-16 20:14:41.586188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.691 [2024-05-16 20:14:41.586204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.691 [2024-05-16 20:14:41.586206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.067 Running I/O for 1 seconds... 00:05:30.067 lcore 0: 200045 00:05:30.067 lcore 1: 200044 00:05:30.067 lcore 2: 200044 00:05:30.067 lcore 3: 200044 00:05:30.067 done. 00:05:30.067 00:05:30.067 real 0m1.236s 00:05:30.067 user 0m4.152s 00:05:30.067 sys 0m0.082s 00:05:30.067 20:14:42 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.067 20:14:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.067 ************************************ 00:05:30.067 END TEST event_perf 00:05:30.067 ************************************ 00:05:30.067 20:14:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.067 20:14:42 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:30.067 20:14:42 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.067 20:14:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.067 ************************************ 00:05:30.067 START TEST event_reactor 00:05:30.067 ************************************ 00:05:30.067 20:14:42 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.067 [2024-05-16 20:14:42.753114] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
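event_perf is a standalone benchmark binary rather than an RPC-driven app: the harness runs it across four cores for one second and it prints the per-lcore event counts that follow. The invocation as used here, with -m read as the core mask and -t as the run time in seconds (inferred from the "Running I/O for 1 seconds" banner and the four reactors started):

    # One-second event throughput run on cores 0-3.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1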
00:05:30.067 [2024-05-16 20:14:42.753182] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881808 ] 00:05:30.067 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.067 [2024-05-16 20:14:42.816404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.067 [2024-05-16 20:14:42.888394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.002 test_start 00:05:31.002 oneshot 00:05:31.002 tick 100 00:05:31.002 tick 100 00:05:31.002 tick 250 00:05:31.002 tick 100 00:05:31.002 tick 100 00:05:31.002 tick 250 00:05:31.002 tick 100 00:05:31.002 tick 500 00:05:31.002 tick 100 00:05:31.002 tick 100 00:05:31.002 tick 250 00:05:31.002 tick 100 00:05:31.002 tick 100 00:05:31.002 test_end 00:05:31.002 00:05:31.002 real 0m1.223s 00:05:31.002 user 0m1.143s 00:05:31.002 sys 0m0.076s 00:05:31.002 20:14:43 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.002 20:14:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.002 ************************************ 00:05:31.002 END TEST event_reactor 00:05:31.002 ************************************ 00:05:31.002 20:14:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.002 20:14:43 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:31.002 20:14:43 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.002 20:14:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.261 ************************************ 00:05:31.261 START TEST event_reactor_perf 00:05:31.261 ************************************ 00:05:31.261 20:14:44 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.261 [2024-05-16 20:14:44.041532] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:31.261 [2024-05-16 20:14:44.041583] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882056 ] 00:05:31.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.261 [2024-05-16 20:14:44.100401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.261 [2024-05-16 20:14:44.173568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.637 test_start 00:05:32.637 test_end 00:05:32.637 Performance: 518527 events per second 00:05:32.637 00:05:32.637 real 0m1.210s 00:05:32.637 user 0m1.137s 00:05:32.637 sys 0m0.069s 00:05:32.637 20:14:45 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.637 20:14:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.637 ************************************ 00:05:32.637 END TEST event_reactor_perf 00:05:32.637 ************************************ 00:05:32.637 20:14:45 event -- event/event.sh@49 -- # uname -s 00:05:32.637 20:14:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.637 20:14:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.637 20:14:45 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.637 20:14:45 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.637 20:14:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.637 ************************************ 00:05:32.637 START TEST event_scheduler 00:05:32.637 ************************************ 00:05:32.637 20:14:45 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.637 * Looking for test storage... 00:05:32.637 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:32.637 20:14:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.637 20:14:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2882338 00:05:32.637 20:14:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.637 20:14:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.637 20:14:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2882338 00:05:32.637 20:14:45 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2882338 ']' 00:05:32.637 20:14:45 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.637 20:14:45 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:32.637 20:14:45 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
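The scheduler test launched just above is started with --wait-for-rpc, so the app sits paused until the harness picks a scheduler over RPC and then releases initialization. A compressed sketch of that startup sequence, using the flags from the trace (rpc_cmd in the trace is the autotest helper that forwards these calls to /var/tmp/spdk.sock; plain rpc.py is shown here for clarity, and the harness waits for the socket before issuing the RPCs):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # Start the scheduler test app paused, waiting for RPC configuration.
    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

    # Select the dynamic scheduler, then let framework initialization finish.
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic
    $SPDK/scripts/rpc.py framework_start_init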
00:05:32.637 20:14:45 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:32.637 20:14:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.637 [2024-05-16 20:14:45.442969] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:32.637 [2024-05-16 20:14:45.443011] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882338 ] 00:05:32.637 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.638 [2024-05-16 20:14:45.497079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.638 [2024-05-16 20:14:45.573842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.638 [2024-05-16 20:14:45.573931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.638 [2024-05-16 20:14:45.574018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.638 [2024-05-16 20:14:45.574020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.573 20:14:46 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:33.573 20:14:46 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:33.574 20:14:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 POWER: Env isn't set yet! 00:05:33.574 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:33.574 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.574 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.574 POWER: Attempting to initialise PSTAT power management... 00:05:33.574 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:33.574 POWER: Initialized successfully for lcore 0 power management 00:05:33.574 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:33.574 POWER: Initialized successfully for lcore 1 power management 00:05:33.574 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:33.574 POWER: Initialized successfully for lcore 2 power management 00:05:33.574 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:33.574 POWER: Initialized successfully for lcore 3 power management 00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 [2024-05-16 20:14:46.346678] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
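The scheduler_create_thread subtest that follows drives the running app through an RPC plugin: each call registers a test thread with a name, an optional core mask (-m) and a target busy percentage (-a), and later calls change or delete a thread by the id the create call returns. A trimmed sketch of the calls as they appear in the trace (rpc_cmd and the scheduler_plugin module are assumed to come from the sourced test environment):

    # Pinned threads, fully busy (-a 100) or idle (-a 0).
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

    # Unpinned threads; the create call prints the new thread id.
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

    # A thread can also be removed again by id.
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"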
00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 ************************************ 00:05:33.574 START TEST scheduler_create_thread 00:05:33.574 ************************************ 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 2 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 3 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 4 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 5 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 6 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 7 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 8 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 9 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 10 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.574 20:14:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.510 20:14:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.510 20:14:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.510 20:14:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.510 20:14:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.885 20:14:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.885 20:14:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.885 20:14:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.885 20:14:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.885 20:14:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.821 20:14:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.821 00:05:36.821 real 0m3.382s 00:05:36.821 user 0m0.023s 00:05:36.821 sys 0m0.005s 00:05:36.821 20:14:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.821 20:14:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.821 ************************************ 00:05:36.821 END TEST scheduler_create_thread 00:05:36.821 ************************************ 00:05:36.821 20:14:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.821 20:14:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2882338 00:05:36.821 20:14:49 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2882338 ']' 00:05:36.821 20:14:49 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2882338 00:05:36.821 20:14:49 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:36.821 20:14:49 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.821 20:14:49 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2882338 00:05:37.079 20:14:49 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:37.079 20:14:49 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:37.079 20:14:49 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2882338' 00:05:37.079 killing process with pid 2882338 00:05:37.079 20:14:49 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2882338 00:05:37.079 20:14:49 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2882338 00:05:37.338 [2024-05-16 20:14:50.150801] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
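For reference, the scheduler_create_thread trace above reduces to a short sequence of test-plugin RPCs. A minimal sketch of that sequence, assuming rpc_cmd is the harness wrapper around scripts/rpc.py with the test's scheduler_plugin on its plugin path, and that each create call prints the new thread id (as the thread_id=11/12 assignments in the trace suggest):

  # One busy ("active") and one idle thread pinned to each of the first four cores.
  for mask in 0x1 0x2 0x4 0x8; do
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done
  # Unpinned threads with fixed activity levels.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$id" 50   # bump the half_active thread to 50% busy
  # A thread that is created and then deleted again.
  id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$id"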
00:05:37.338 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:37.338 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:37.338 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:37.338 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:37.338 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:37.338 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:37.338 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:37.338 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:37.597 00:05:37.597 real 0m5.059s 00:05:37.597 user 0m10.468s 00:05:37.597 sys 0m0.334s 00:05:37.597 20:14:50 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.597 20:14:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.597 ************************************ 00:05:37.597 END TEST event_scheduler 00:05:37.597 ************************************ 00:05:37.597 20:14:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:37.597 20:14:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:37.597 20:14:50 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.597 20:14:50 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.597 20:14:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.597 ************************************ 00:05:37.597 START TEST app_repeat 00:05:37.597 ************************************ 00:05:37.597 20:14:50 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2883302 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2883302' 00:05:37.597 Process app_repeat pid: 2883302 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:37.597 spdk_app_start Round 0 00:05:37.597 20:14:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2883302 /var/tmp/spdk-nbd.sock 00:05:37.597 20:14:50 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2883302 ']' 00:05:37.597 20:14:50 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.597 20:14:50 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.597 20:14:50 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.597 20:14:50 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.597 20:14:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.597 [2024-05-16 20:14:50.488796] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:37.597 [2024-05-16 20:14:50.488847] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883302 ] 00:05:37.597 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.597 [2024-05-16 20:14:50.550549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.856 [2024-05-16 20:14:50.624502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.856 [2024-05-16 20:14:50.624505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.423 20:14:51 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:38.423 20:14:51 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:38.423 20:14:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.682 Malloc0 00:05:38.682 20:14:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.682 Malloc1 00:05:38.941 20:14:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.941 /dev/nbd0 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.941 1+0 records in 00:05:38.941 1+0 records out 00:05:38.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175446 s, 23.3 MB/s 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:38.941 20:14:51 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.941 20:14:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.200 /dev/nbd1 00:05:39.200 20:14:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.200 20:14:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.200 1+0 records in 00:05:39.200 1+0 records out 00:05:39.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183274 s, 22.3 MB/s 
00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:39.200 20:14:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:39.200 20:14:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.200 20:14:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.200 20:14:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.200 20:14:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.200 20:14:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.459 { 00:05:39.459 "nbd_device": "/dev/nbd0", 00:05:39.459 "bdev_name": "Malloc0" 00:05:39.459 }, 00:05:39.459 { 00:05:39.459 "nbd_device": "/dev/nbd1", 00:05:39.459 "bdev_name": "Malloc1" 00:05:39.459 } 00:05:39.459 ]' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.459 { 00:05:39.459 "nbd_device": "/dev/nbd0", 00:05:39.459 "bdev_name": "Malloc0" 00:05:39.459 }, 00:05:39.459 { 00:05:39.459 "nbd_device": "/dev/nbd1", 00:05:39.459 "bdev_name": "Malloc1" 00:05:39.459 } 00:05:39.459 ]' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.459 /dev/nbd1' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.459 /dev/nbd1' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.459 256+0 records in 00:05:39.459 256+0 records out 00:05:39.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00975072 s, 108 MB/s 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.459 20:14:52 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.459 256+0 records in 00:05:39.459 256+0 records out 00:05:39.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131024 s, 80.0 MB/s 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.459 256+0 records in 00:05:39.459 256+0 records out 00:05:39.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142145 s, 73.8 MB/s 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.459 20:14:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 
0 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.718 20:14:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.977 20:14:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.977 20:14:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.235 20:14:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.494 [2024-05-16 20:14:53.334485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.494 [2024-05-16 20:14:53.402834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.494 [2024-05-16 20:14:53.402836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.494 [2024-05-16 20:14:53.444070] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.494 [2024-05-16 20:14:53.444110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
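Each app_repeat round in the trace above is the same malloc-bdev/NBD round trip. A condensed sketch of Round 0, with the long workspace prefix abbreviated to $SPDK and the scratch file kept in the current directory purely for readability; the socket path, sizes and verification commands are the ones visible in the trace:

  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096            # 64 MB malloc bdev, 4096-byte blocks (Malloc0 in the trace)
  $RPC bdev_malloc_create 64 4096            # second bdev (Malloc1)
  $RPC nbd_start_disk Malloc0 /dev/nbd0      # expose each bdev as an NBD block device
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct  # write it through the NBD node
      cmp -b -n 1M nbdrandtest "$nbd"                             # read back and compare
  done
  rm nbdrandtest
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1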
00:05:43.780 20:14:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.780 20:14:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.780 spdk_app_start Round 1 00:05:43.780 20:14:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2883302 /var/tmp/spdk-nbd.sock 00:05:43.780 20:14:56 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2883302 ']' 00:05:43.780 20:14:56 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.780 20:14:56 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.780 20:14:56 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.780 20:14:56 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.780 20:14:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.780 20:14:56 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.780 20:14:56 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:43.780 20:14:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.780 Malloc0 00:05:43.780 20:14:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.780 Malloc1 00:05:43.780 20:14:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.780 20:14:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.038 /dev/nbd0 00:05:44.038 20:14:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.038 20:14:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.038 1+0 records in 00:05:44.038 1+0 records out 00:05:44.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266701 s, 15.4 MB/s 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:44.038 20:14:56 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:44.038 20:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.038 20:14:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.038 20:14:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.297 /dev/nbd1 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.297 1+0 records in 00:05:44.297 1+0 records out 00:05:44.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194911 s, 21.0 MB/s 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:44.297 20:14:57 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.297 { 00:05:44.297 "nbd_device": "/dev/nbd0", 00:05:44.297 "bdev_name": "Malloc0" 00:05:44.297 }, 00:05:44.297 { 00:05:44.297 "nbd_device": "/dev/nbd1", 00:05:44.297 "bdev_name": "Malloc1" 00:05:44.297 } 00:05:44.297 ]' 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.297 { 00:05:44.297 "nbd_device": "/dev/nbd0", 00:05:44.297 "bdev_name": "Malloc0" 00:05:44.297 }, 00:05:44.297 { 00:05:44.297 "nbd_device": "/dev/nbd1", 00:05:44.297 "bdev_name": "Malloc1" 00:05:44.297 } 00:05:44.297 ]' 00:05:44.297 20:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.556 /dev/nbd1' 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.556 /dev/nbd1' 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.556 256+0 records in 00:05:44.556 256+0 records out 00:05:44.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103644 s, 101 MB/s 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.556 256+0 records in 00:05:44.556 256+0 records out 00:05:44.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130981 s, 80.1 MB/s 00:05:44.556 20:14:57 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.556 256+0 records in 00:05:44.556 256+0 records out 00:05:44.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138304 s, 75.8 MB/s 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.556 20:14:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.815 20:14:57 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.815 20:14:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.073 20:14:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.073 20:14:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.332 20:14:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.591 [2024-05-16 20:14:58.341368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.591 [2024-05-16 20:14:58.410658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.591 [2024-05-16 20:14:58.410661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.591 [2024-05-16 20:14:58.452770] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.591 [2024-05-16 20:14:58.452812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
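Before and after each teardown the harness also lists the NBD registrations over RPC and counts the /dev/nbd entries (2 while both disks are attached, 0 afterwards). A minimal equivalent of that check, reusing the $SPDK abbreviation from the sketch above:

  disks=$($SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)   # JSON list of nbd_device/bdev_name pairs
  names=$(echo "$disks" | jq -r '.[] | .nbd_device')                      # /dev/nbd0 and /dev/nbd1 while attached
  count=$(echo "$names" | grep -c /dev/nbd || true)                       # grep exits non-zero on an empty list
  echo "attached NBD nodes: $count"                                       # expected: 2 before nbd_stop_disk, 0 after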
00:05:48.877 20:15:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.877 20:15:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:48.877 spdk_app_start Round 2 00:05:48.877 20:15:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2883302 /var/tmp/spdk-nbd.sock 00:05:48.877 20:15:01 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2883302 ']' 00:05:48.877 20:15:01 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.877 20:15:01 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.877 20:15:01 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.877 20:15:01 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.877 20:15:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.877 20:15:01 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.877 20:15:01 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:48.877 20:15:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.877 Malloc0 00:05:48.877 20:15:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.877 Malloc1 00:05:48.877 20:15:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.877 20:15:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.136 /dev/nbd0 00:05:49.136 20:15:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.136 20:15:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.136 1+0 records in 00:05:49.136 1+0 records out 00:05:49.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171599 s, 23.9 MB/s 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:49.136 20:15:01 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:49.136 20:15:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.136 20:15:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.136 20:15:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.136 /dev/nbd1 00:05:49.136 20:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.137 20:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.137 1+0 records in 00:05:49.137 1+0 records out 00:05:49.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236228 s, 17.3 MB/s 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:49.137 20:15:02 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:49.137 20:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.137 20:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.137 20:15:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.137 20:15:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.137 20:15:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.399 20:15:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.399 { 00:05:49.399 "nbd_device": "/dev/nbd0", 00:05:49.399 "bdev_name": "Malloc0" 00:05:49.399 }, 00:05:49.399 { 00:05:49.399 "nbd_device": "/dev/nbd1", 00:05:49.399 "bdev_name": "Malloc1" 00:05:49.399 } 00:05:49.399 ]' 00:05:49.399 20:15:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.399 { 00:05:49.399 "nbd_device": "/dev/nbd0", 00:05:49.400 "bdev_name": "Malloc0" 00:05:49.400 }, 00:05:49.400 { 00:05:49.400 "nbd_device": "/dev/nbd1", 00:05:49.400 "bdev_name": "Malloc1" 00:05:49.400 } 00:05:49.400 ]' 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.400 /dev/nbd1' 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.400 /dev/nbd1' 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.400 256+0 records in 00:05:49.400 256+0 records out 00:05:49.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103404 s, 101 MB/s 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.400 256+0 records in 00:05:49.400 256+0 records out 00:05:49.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127888 s, 82.0 MB/s 00:05:49.400 20:15:02 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.400 20:15:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.659 256+0 records in 00:05:49.659 256+0 records out 00:05:49.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148864 s, 70.4 MB/s 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.659 20:15:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.918 20:15:02 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.918 20:15:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.177 20:15:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.177 20:15:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.177 20:15:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.177 20:15:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.177 20:15:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.436 20:15:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.436 [2024-05-16 20:15:03.412956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.695 [2024-05-16 20:15:03.482634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.695 [2024-05-16 20:15:03.482636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.695 [2024-05-16 20:15:03.524132] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.695 [2024-05-16 20:15:03.524172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.980 20:15:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2883302 /var/tmp/spdk-nbd.sock 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2883302 ']' 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:53.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:53.980 20:15:06 event.app_repeat -- event/event.sh@39 -- # killprocess 2883302 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2883302 ']' 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2883302 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2883302 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2883302' 00:05:53.980 killing process with pid 2883302 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2883302 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2883302 00:05:53.980 spdk_app_start is called in Round 0. 00:05:53.980 Shutdown signal received, stop current app iteration 00:05:53.980 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:53.980 spdk_app_start is called in Round 1. 00:05:53.980 Shutdown signal received, stop current app iteration 00:05:53.980 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:53.980 spdk_app_start is called in Round 2. 00:05:53.980 Shutdown signal received, stop current app iteration 00:05:53.980 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:53.980 spdk_app_start is called in Round 3. 
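The nbd_dd_data_verify trace earlier in this app_repeat run writes a test file onto each NBD device with dd and then compares the device contents back with cmp before removing the file. A minimal stand-alone sketch of that write/verify pattern, assuming the devices are already exported and using illustrative file and device names rather than the helper's own:

    # Sketch: write a test pattern to NBD devices, then verify it (paths are illustrative).
    tmp_file=/tmp/nbdrandtest            # the real test keeps this under spdk/test/event
    nbd_list=(/dev/nbd0 /dev/nbd1)       # devices assumed to be connected already

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data

    for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write, bypassing the page cache
    done

    for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"    # byte-compare the first 1 MiB; non-zero exit on mismatch
    done

    rm "$tmp_file"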
00:05:53.980 Shutdown signal received, stop current app iteration 00:05:53.980 20:15:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.980 20:15:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:53.980 00:05:53.980 real 0m16.168s 00:05:53.980 user 0m34.994s 00:05:53.980 sys 0m2.349s 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.980 20:15:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.980 ************************************ 00:05:53.980 END TEST app_repeat 00:05:53.980 ************************************ 00:05:53.980 20:15:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.980 20:15:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:53.980 20:15:06 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.980 20:15:06 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.980 20:15:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.980 ************************************ 00:05:53.980 START TEST cpu_locks 00:05:53.980 ************************************ 00:05:53.981 20:15:06 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:53.981 * Looking for test storage... 00:05:53.981 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:53.981 20:15:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.981 20:15:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.981 20:15:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.981 20:15:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.981 20:15:06 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.981 20:15:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.981 20:15:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.981 ************************************ 00:05:53.981 START TEST default_locks 00:05:53.981 ************************************ 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2886791 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2886791 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2886791 ']' 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
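Each cpu_locks subtest that follows starts its own spdk_tgt with an explicit core mask and then blocks in waitforlisten until the RPC socket is up. The launch helper's body is not captured in this log; one hedged way to reproduce the launch-and-record-PID step is simply to background the binary and capture $! (the real cpu_locks.sh may record the PID differently):

    # Sketch: start an SPDK target pinned to core 0 and remember its PID (assumed approach).
    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt  # path as seen in the trace

    "$SPDK_BIN" -m 0x1 &        # -m 0x1: run reactors on core 0 only
    spdk_tgt_pid=$!             # kept for the later locks_exist/killprocess checks

    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."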
00:05:53.981 20:15:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:53.981 20:15:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.981 [2024-05-16 20:15:06.864733] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:53.981 [2024-05-16 20:15:06.864778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886791 ] 00:05:53.981 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.981 [2024-05-16 20:15:06.923819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.239 [2024-05-16 20:15:07.005225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.807 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.807 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:54.807 20:15:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2886791 00:05:54.807 20:15:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.807 20:15:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2886791 00:05:55.067 lslocks: write error 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2886791 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2886791 ']' 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2886791 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2886791 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2886791' 00:05:55.067 killing process with pid 2886791 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2886791 00:05:55.067 20:15:07 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2886791 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2886791 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2886791 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 2886791 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2886791 ']' 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.326 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2886791) - No such process 00:05:55.326 ERROR: process (pid: 2886791) is no longer running 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.326 00:05:55.326 real 0m1.369s 00:05:55.326 user 0m1.444s 00:05:55.326 sys 0m0.426s 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.326 20:15:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.326 ************************************ 00:05:55.326 END TEST default_locks 00:05:55.326 ************************************ 00:05:55.326 20:15:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.326 20:15:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.326 20:15:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.326 20:15:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.326 ************************************ 00:05:55.326 START TEST default_locks_via_rpc 00:05:55.326 ************************************ 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2887059 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2887059 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.326 20:15:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2887059 ']' 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.326 20:15:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.326 [2024-05-16 20:15:08.303521] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:55.326 [2024-05-16 20:15:08.303563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887059 ] 00:05:55.585 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.585 [2024-05-16 20:15:08.362117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.585 [2024-05-16 20:15:08.442284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2887059 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.153 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2887059 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2887059 00:05:56.722 20:15:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2887059 ']' 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2887059 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2887059 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2887059' 00:05:56.722 killing process with pid 2887059 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2887059 00:05:56.722 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2887059 00:05:56.981 00:05:56.981 real 0m1.604s 00:05:56.981 user 0m1.692s 00:05:56.981 sys 0m0.521s 00:05:56.981 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.981 20:15:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.981 ************************************ 00:05:56.981 END TEST default_locks_via_rpc 00:05:56.981 ************************************ 00:05:56.981 20:15:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.981 20:15:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.981 20:15:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.981 20:15:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.981 ************************************ 00:05:56.981 START TEST non_locking_app_on_locked_coremask 00:05:56.981 ************************************ 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2887326 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2887326 /var/tmp/spdk.sock 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2887326 ']' 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
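default_locks and default_locks_via_rpc reduce to the same check: while core locks are active, lslocks on the target PID reports a file lock whose path contains spdk_cpu_lock, and the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs toggle that state at runtime. A condensed sketch of that check, with the PID and socket path as placeholders and assuming rpc_cmd in the trace wraps scripts/rpc.py:

    # Sketch: verify per-core lock files via lslocks and toggle them over RPC.
    RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    PID=$spdk_tgt_pid   # assumed to hold the target's PID, as in the launch sketch above

    locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock   # succeeds only if the process holds a core lock file
    }

    "$RPC_PY" -s "$SOCK" framework_disable_cpumask_locks   # drop the core locks at runtime
    locks_exist "$PID" && echo "unexpected: core lock files still present"

    "$RPC_PY" -s "$SOCK" framework_enable_cpumask_locks    # re-acquire them
    locks_exist "$PID" || echo "unexpected: core lock files missing"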
00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.981 20:15:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.239 [2024-05-16 20:15:09.978982] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:57.239 [2024-05-16 20:15:09.979020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887326 ] 00:05:57.239 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.239 [2024-05-16 20:15:10.040607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.239 [2024-05-16 20:15:10.123750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.804 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2887553 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2887553 /var/tmp/spdk2.sock 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2887553 ']' 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.805 20:15:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.063 [2024-05-16 20:15:10.825842] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:58.063 [2024-05-16 20:15:10.825893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887553 ] 00:05:58.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.063 [2024-05-16 20:15:10.906317] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
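non_locking_app_on_locked_coremask shows the positive case: a second spdk_tgt started with --disable-cpumask-locks and its own RPC socket comes up on the same core (the "CPU core locks deactivated" notice above) instead of failing the claim. A sketch of that second launch, assuming the first target from the earlier sketch already holds core 0:

    # Sketch: start a second target on an already-claimed core, opting out of core locks.
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same mask, separate RPC socket
    spdk_tgt_pid2=$!
    # Expected console output from the second target (seen in the trace): "CPU core locks deactivated."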
00:05:58.063 [2024-05-16 20:15:10.906345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.321 [2024-05-16 20:15:11.057784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.888 20:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.888 20:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:58.888 20:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2887326 00:05:58.888 20:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2887326 00:05:58.888 20:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.147 lslocks: write error 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2887326 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2887326 ']' 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2887326 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2887326 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2887326' 00:05:59.147 killing process with pid 2887326 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2887326 00:05:59.147 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2887326 00:05:59.715 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2887553 00:05:59.715 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2887553 ']' 00:05:59.715 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2887553 00:05:59.715 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:59.715 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.715 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2887553 00:05:59.974 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.974 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.974 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2887553' 00:05:59.974 
killing process with pid 2887553 00:05:59.974 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2887553 00:05:59.974 20:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2887553 00:06:00.233 00:06:00.233 real 0m3.108s 00:06:00.233 user 0m3.315s 00:06:00.233 sys 0m0.901s 00:06:00.233 20:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.233 20:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.233 ************************************ 00:06:00.233 END TEST non_locking_app_on_locked_coremask 00:06:00.233 ************************************ 00:06:00.233 20:15:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.233 20:15:13 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.233 20:15:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.233 20:15:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.233 ************************************ 00:06:00.233 START TEST locking_app_on_unlocked_coremask 00:06:00.233 ************************************ 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2887832 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2887832 /var/tmp/spdk.sock 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2887832 ']' 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.233 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.233 [2024-05-16 20:15:13.146508] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:00.233 [2024-05-16 20:15:13.146552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887832 ] 00:06:00.233 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.233 [2024-05-16 20:15:13.206352] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
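Every subtest tears down its targets with the killprocess helper traced above: it probes the PID with kill -0, resolves the process name with ps as a guard, then kills and waits. A compact sketch of that sequence, with the sudo branch and error handling trimmed:

    # Sketch: the kill-and-wait teardown pattern from common/autotest_common.sh (simplified).
    killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                         # nothing to do if the PID is already gone
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")    # e.g. "reactor_0" for an SPDK target
      # the real helper branches here when the name is "sudo"; plain targets take this path:
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # works because the target was started from this shell; leaves a clean state for lslocks
    }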
00:06:00.233 [2024-05-16 20:15:13.206380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.491 [2024-05-16 20:15:13.278215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2888060 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2888060 /var/tmp/spdk2.sock 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2888060 ']' 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.062 20:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.062 [2024-05-16 20:15:13.998786] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:01.062 [2024-05-16 20:15:13.998834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888060 ] 00:06:01.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.321 [2024-05-16 20:15:14.078288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.321 [2024-05-16 20:15:14.227483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.888 20:15:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.888 20:15:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:01.888 20:15:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2888060 00:06:01.888 20:15:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2888060 00:06:01.888 20:15:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.456 lslocks: write error 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2887832 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2887832 ']' 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2887832 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2887832 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2887832' 00:06:02.456 killing process with pid 2887832 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2887832 00:06:02.456 20:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2887832 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2888060 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2888060 ']' 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2888060 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2888060 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2888060' 00:06:03.394 killing process with pid 2888060 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2888060 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2888060 00:06:03.394 00:06:03.394 real 0m3.271s 00:06:03.394 user 0m3.508s 00:06:03.394 sys 0m0.950s 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.394 20:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.394 ************************************ 00:06:03.394 END TEST locking_app_on_unlocked_coremask 00:06:03.394 ************************************ 00:06:03.653 20:15:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.653 20:15:16 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.653 20:15:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.653 20:15:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.653 ************************************ 00:06:03.653 START TEST locking_app_on_locked_coremask 00:06:03.653 ************************************ 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2888548 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2888548 /var/tmp/spdk.sock 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2888548 ']' 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.653 20:15:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.653 [2024-05-16 20:15:16.496769] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
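Between launch and the first RPC, the tests block in waitforlisten until the target's UNIX domain socket is accepting connections. The helper's implementation is not captured in this log; a generic stand-in that polls for the socket path (not SPDK's actual code) could look like:

    # Sketch: wait until an SPDK RPC socket exists, with a bounded number of retries (generic stand-in).
    wait_for_sock() {
      local sock=${1:-/var/tmp/spdk.sock}
      local max_retries=${2:-100}                 # the trace uses max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for ((i = 1; i <= max_retries; i++)); do
        [ -S "$sock" ] && return 0                # socket file exists; good enough for this sketch
        sleep 0.1
      done
      return 1
    }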
00:06:03.653 [2024-05-16 20:15:16.496809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888548 ] 00:06:03.653 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.653 [2024-05-16 20:15:16.555851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.653 [2024-05-16 20:15:16.627826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2888594 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2888594 /var/tmp/spdk2.sock 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2888594 /var/tmp/spdk2.sock 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2888594 /var/tmp/spdk2.sock 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2888594 ']' 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.684 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.684 [2024-05-16 20:15:17.320456] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:04.684 [2024-05-16 20:15:17.320503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888594 ] 00:06:04.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.684 [2024-05-16 20:15:17.397681] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2888548 has claimed it. 00:06:04.684 [2024-05-16 20:15:17.397711] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.255 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2888594) - No such process 00:06:05.255 ERROR: process (pid: 2888594) is no longer running 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2888548 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2888548 00:06:05.255 20:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.513 lslocks: write error 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2888548 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2888548 ']' 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2888548 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2888548 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2888548' 00:06:05.513 killing process with pid 2888548 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2888548 00:06:05.513 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2888548 00:06:05.772 00:06:05.772 real 0m2.314s 00:06:05.772 user 0m2.554s 00:06:05.772 sys 0m0.628s 00:06:05.772 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.772 20:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.772 ************************************ 00:06:05.772 END TEST locking_app_on_locked_coremask 00:06:05.772 ************************************ 00:06:06.031 20:15:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.031 20:15:18 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.031 20:15:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.031 20:15:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.031 ************************************ 00:06:06.031 START TEST locking_overlapped_coremask 00:06:06.031 ************************************ 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2888958 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2888958 /var/tmp/spdk.sock 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2888958 ']' 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.031 20:15:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.031 [2024-05-16 20:15:18.877209] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
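locking_app_on_locked_coremask, which finished just above, inverts the earlier case: the second target keeps core locks enabled, so claim_cpu_cores fails ("Cannot create lock on core 0, probably process ... has claimed it") and the app exits; the test wraps the launch in NOT so that the failure is the passing outcome. A sketch of that negative check, reusing the stand-in helpers from the sketches above:

    # Sketch: expect the second locked launch to fail because core 0 is already claimed.
    "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &   # same core mask, locks left enabled this time
    if wait_for_sock /var/tmp/spdk2.sock; then
      echo "unexpected: second target came up despite the existing core lock"
      exit 1
    fi
    # Expected errors from the second target (seen in the trace):
    #   claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process <pid> has claimed it.
    #   spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.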
00:06:06.031 [2024-05-16 20:15:18.877247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888958 ] 00:06:06.031 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.031 [2024-05-16 20:15:18.937387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.031 [2024-05-16 20:15:19.019382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.031 [2024-05-16 20:15:19.019494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.031 [2024-05-16 20:15:19.019496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2889070 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2889070 /var/tmp/spdk2.sock 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2889070 /var/tmp/spdk2.sock 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2889070 /var/tmp/spdk2.sock 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2889070 ']' 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.970 20:15:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.970 [2024-05-16 20:15:19.728482] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:06.970 [2024-05-16 20:15:19.728527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889070 ] 00:06:06.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.971 [2024-05-16 20:15:19.810669] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2888958 has claimed it. 00:06:06.971 [2024-05-16 20:15:19.810706] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.538 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2889070) - No such process 00:06:07.538 ERROR: process (pid: 2889070) is no longer running 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2888958 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2888958 ']' 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2888958 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2888958 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2888958' 00:06:07.538 killing process with pid 2888958 00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 2888958 
00:06:07.538 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2888958 00:06:07.798 00:06:07.798 real 0m1.882s 00:06:07.798 user 0m5.313s 00:06:07.798 sys 0m0.402s 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.798 ************************************ 00:06:07.798 END TEST locking_overlapped_coremask 00:06:07.798 ************************************ 00:06:07.798 20:15:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.798 20:15:20 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.798 20:15:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.798 20:15:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.798 ************************************ 00:06:07.798 START TEST locking_overlapped_coremask_via_rpc 00:06:07.798 ************************************ 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2889322 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2889322 /var/tmp/spdk.sock 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2889322 ']' 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.798 20:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.056 [2024-05-16 20:15:20.829328] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:08.056 [2024-05-16 20:15:20.829368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889322 ] 00:06:08.056 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.056 [2024-05-16 20:15:20.889571] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
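After the overlapped claim fails, locking_overlapped_coremask confirms that the first target (mask 0x7) still holds exactly the lock files for cores 0-2; the trace above shows check_remaining_locks doing that comparison. The same comparison in isolation, assuming the /var/tmp/spdk_cpu_lock_* naming seen in the trace:

    # Sketch: verify that exactly cores 0-2 are still locked (file names as seen in the trace).
    locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files exist right now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1 and 2 for a 0x7 core mask
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || {
      echo "unexpected set of core lock files: ${locks[*]}"
      exit 1
    }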
00:06:08.056 [2024-05-16 20:15:20.889595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.056 [2024-05-16 20:15:20.971344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.056 [2024-05-16 20:15:20.971447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.056 [2024-05-16 20:15:20.971450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2889481 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2889481 /var/tmp/spdk2.sock 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2889481 ']' 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.992 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.993 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.993 20:15:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.993 [2024-05-16 20:15:21.698169] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:08.993 [2024-05-16 20:15:21.698222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889481 ] 00:06:08.993 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.993 [2024-05-16 20:15:21.786048] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
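The two targets in this test are pinned to masks 0x7 (cores 0-2) and 0x1c (cores 2-4); a quick bitwise check shows the only shared core is core 2, which is why the "Cannot create lock on core 2" errors in these tests always name that core.

    # Bitwise overlap of the two reactor masks used by the cpu_locks tests.
    printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2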
00:06:08.993 [2024-05-16 20:15:21.786074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.993 [2024-05-16 20:15:21.940596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.993 [2024-05-16 20:15:21.944469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.993 [2024-05-16 20:15:21.944470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.561 [2024-05-16 20:15:22.535494] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2889322 has claimed it. 
00:06:09.561 request: 00:06:09.561 { 00:06:09.561 "method": "framework_enable_cpumask_locks", 00:06:09.561 "req_id": 1 00:06:09.561 } 00:06:09.561 Got JSON-RPC error response 00:06:09.561 response: 00:06:09.561 { 00:06:09.561 "code": -32603, 00:06:09.561 "message": "Failed to claim CPU core: 2" 00:06:09.561 } 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2889322 /var/tmp/spdk.sock 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2889322 ']' 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.561 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2889481 /var/tmp/spdk2.sock 00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2889481 ']' 00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
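A sketch of the same RPC issued by hand, assuming the second target is still listening on /var/tmp/spdk2.sock and that framework_enable_cpumask_locks is reachable through scripts/rpc.py the same way the test's rpc_cmd wrapper invokes it:

    # Ask the second (lock-less) target to start enforcing core locks; this is
    # expected to fail while the first target still holds the lock for core 2.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # Expected JSON-RPC error, matching the response above:
    #   "code": -32603, "message": "Failed to claim CPU core: 2"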
00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.820 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.079 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.079 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:10.079 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.079 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.079 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.079 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.079 00:06:10.079 real 0m2.138s 00:06:10.079 user 0m0.899s 00:06:10.079 sys 0m0.162s 00:06:10.079 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.079 20:15:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.079 ************************************ 00:06:10.079 END TEST locking_overlapped_coremask_via_rpc 00:06:10.079 ************************************ 00:06:10.079 20:15:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.079 20:15:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2889322 ]] 00:06:10.079 20:15:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2889322 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2889322 ']' 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2889322 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2889322 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2889322' 00:06:10.080 killing process with pid 2889322 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2889322 00:06:10.080 20:15:22 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2889322 00:06:10.339 20:15:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2889481 ]] 00:06:10.339 20:15:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2889481 00:06:10.339 20:15:23 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2889481 ']' 00:06:10.339 20:15:23 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2889481 00:06:10.339 20:15:23 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:10.339 20:15:23 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:10.339 20:15:23 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2889481 00:06:10.597 20:15:23 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:10.597 20:15:23 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:10.597 20:15:23 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2889481' 00:06:10.597 killing process with pid 2889481 00:06:10.597 20:15:23 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2889481 00:06:10.597 20:15:23 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2889481 00:06:10.855 20:15:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.855 20:15:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:10.855 20:15:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2889322 ]] 00:06:10.855 20:15:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2889322 00:06:10.855 20:15:23 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2889322 ']' 00:06:10.855 20:15:23 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2889322 00:06:10.855 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2889322) - No such process 00:06:10.855 20:15:23 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2889322 is not found' 00:06:10.855 Process with pid 2889322 is not found 00:06:10.855 20:15:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2889481 ]] 00:06:10.855 20:15:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2889481 00:06:10.855 20:15:23 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2889481 ']' 00:06:10.855 20:15:23 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2889481 00:06:10.855 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2889481) - No such process 00:06:10.855 20:15:23 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2889481 is not found' 00:06:10.855 Process with pid 2889481 is not found 00:06:10.855 20:15:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.855 00:06:10.855 real 0m16.985s 00:06:10.855 user 0m29.337s 00:06:10.855 sys 0m4.889s 00:06:10.855 20:15:23 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.855 20:15:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.855 ************************************ 00:06:10.855 END TEST cpu_locks 00:06:10.855 ************************************ 00:06:10.855 00:06:10.855 real 0m42.420s 00:06:10.855 user 1m21.437s 00:06:10.855 sys 0m8.148s 00:06:10.855 20:15:23 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.855 20:15:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.855 ************************************ 00:06:10.855 END TEST event 00:06:10.855 ************************************ 00:06:10.855 20:15:23 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:10.855 20:15:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.855 20:15:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.855 20:15:23 -- common/autotest_common.sh@10 -- # set +x 00:06:10.855 ************************************ 00:06:10.855 START TEST thread 00:06:10.855 ************************************ 00:06:10.855 20:15:23 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:10.855 * Looking for test storage... 00:06:11.114 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:11.114 20:15:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.114 20:15:23 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:11.114 20:15:23 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.114 20:15:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.114 ************************************ 00:06:11.114 START TEST thread_poller_perf 00:06:11.114 ************************************ 00:06:11.114 20:15:23 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.114 [2024-05-16 20:15:23.910203] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:11.114 [2024-05-16 20:15:23.910265] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889893 ] 00:06:11.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.114 [2024-05-16 20:15:23.975405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.114 [2024-05-16 20:15:24.047723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.114 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:12.491 ====================================== 00:06:12.491 busy:2108210624 (cyc) 00:06:12.491 total_run_count: 420000 00:06:12.491 tsc_hz: 2100000000 (cyc) 00:06:12.491 ====================================== 00:06:12.491 poller_cost: 5019 (cyc), 2390 (nsec) 00:06:12.491 00:06:12.491 real 0m1.234s 00:06:12.491 user 0m1.143s 00:06:12.491 sys 0m0.085s 00:06:12.491 20:15:25 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.491 20:15:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.491 ************************************ 00:06:12.491 END TEST thread_poller_perf 00:06:12.491 ************************************ 00:06:12.491 20:15:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.491 20:15:25 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:12.491 20:15:25 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.491 20:15:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.491 ************************************ 00:06:12.491 START TEST thread_poller_perf 00:06:12.491 ************************************ 00:06:12.491 20:15:25 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.491 [2024-05-16 20:15:25.207991] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
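A back-of-the-envelope check of the poller_perf figures printed above, assuming poller_cost is simply the busy cycle count divided by total_run_count and converted to nanoseconds via tsc_hz; the second run below works out the same way (2101393274 / 5542000 is roughly 379 cyc, about 180 nsec).

    busy=2108210624; runs=420000; tsc_hz=2100000000
    echo "poller_cost (cyc): $(( busy / runs ))"            # ~5019, as reported
    awk -v cyc=$(( busy / runs )) -v hz="$tsc_hz" \
        'BEGIN { printf "poller_cost (nsec): %.0f\n", cyc / hz * 1e9 }'   # ~2390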
00:06:12.491 [2024-05-16 20:15:25.208060] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890141 ] 00:06:12.491 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.491 [2024-05-16 20:15:25.274353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.491 [2024-05-16 20:15:25.341059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.491 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:13.426 ====================================== 00:06:13.426 busy:2101393274 (cyc) 00:06:13.426 total_run_count: 5542000 00:06:13.426 tsc_hz: 2100000000 (cyc) 00:06:13.426 ====================================== 00:06:13.426 poller_cost: 379 (cyc), 180 (nsec) 00:06:13.426 00:06:13.426 real 0m1.224s 00:06:13.426 user 0m1.144s 00:06:13.426 sys 0m0.076s 00:06:13.426 20:15:26 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.426 20:15:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.426 ************************************ 00:06:13.426 END TEST thread_poller_perf 00:06:13.426 ************************************ 00:06:13.685 20:15:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.685 00:06:13.685 real 0m2.656s 00:06:13.685 user 0m2.363s 00:06:13.685 sys 0m0.289s 00:06:13.685 20:15:26 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.685 20:15:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.685 ************************************ 00:06:13.685 END TEST thread 00:06:13.685 ************************************ 00:06:13.685 20:15:26 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:13.685 20:15:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.685 20:15:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.685 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.685 ************************************ 00:06:13.685 START TEST accel 00:06:13.685 ************************************ 00:06:13.685 20:15:26 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:13.685 * Looking for test storage... 
00:06:13.685 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:13.685 20:15:26 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:13.685 20:15:26 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:13.685 20:15:26 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.685 20:15:26 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2890428 00:06:13.685 20:15:26 accel -- accel/accel.sh@63 -- # waitforlisten 2890428 00:06:13.685 20:15:26 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:13.685 20:15:26 accel -- common/autotest_common.sh@827 -- # '[' -z 2890428 ']' 00:06:13.685 20:15:26 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:13.685 20:15:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.685 20:15:26 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.685 20:15:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.685 20:15:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.685 20:15:26 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.685 20:15:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.685 20:15:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.685 20:15:26 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.685 20:15:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:13.685 20:15:26 accel -- accel/accel.sh@41 -- # jq -r . 00:06:13.685 20:15:26 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.685 20:15:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.685 [2024-05-16 20:15:26.638754] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:13.685 [2024-05-16 20:15:26.638805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890428 ] 00:06:13.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.943 [2024-05-16 20:15:26.698853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.943 [2024-05-16 20:15:26.771096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.510 20:15:27 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.510 20:15:27 accel -- common/autotest_common.sh@860 -- # return 0 00:06:14.510 20:15:27 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:14.510 20:15:27 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:14.510 20:15:27 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:14.510 20:15:27 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:14.510 20:15:27 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:14.510 20:15:27 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:14.510 20:15:27 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:14.510 20:15:27 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.510 20:15:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 20:15:27 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.510 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.510 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.510 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.510 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.510 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.510 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.510 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.510 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.510 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.510 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.510 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.510 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.510 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.510 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.510 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.510 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.511 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.511 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.511 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.511 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.511 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.511 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.511 
20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.511 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.511 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.511 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.511 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.511 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.511 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.769 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.769 20:15:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.769 20:15:27 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.769 20:15:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.769 20:15:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.769 20:15:27 accel -- accel/accel.sh@75 -- # killprocess 2890428 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@946 -- # '[' -z 2890428 ']' 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@950 -- # kill -0 2890428 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@951 -- # uname 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2890428 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2890428' 00:06:14.769 killing process with pid 2890428 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@965 -- # kill 2890428 00:06:14.769 20:15:27 accel -- common/autotest_common.sh@970 -- # wait 2890428 00:06:15.028 20:15:27 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:15.028 20:15:27 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:15.028 20:15:27 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:15.028 20:15:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.028 20:15:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.028 20:15:27 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:15.028 20:15:27 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
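The opcode table the test walks through above comes from a single RPC; a minimal sketch of the same query, assuming the method is reachable through scripts/rpc.py as it is through the test's rpc_cmd helper, and a default build where every opcode is backed by the software module:

    # Print one "opcode=module" line per accel operation (the jq filter is the
    # one accel.sh uses above); with no accel modules loaded every entry is
    # expected to read "<opcode>=software".
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'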
00:06:15.028 20:15:27 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.028 20:15:27 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:15.028 20:15:27 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:15.028 20:15:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:15.028 20:15:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.028 20:15:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.028 ************************************ 00:06:15.028 START TEST accel_missing_filename 00:06:15.028 ************************************ 00:06:15.028 20:15:27 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:15.028 20:15:27 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:15.028 20:15:27 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:15.028 20:15:27 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:15.028 20:15:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.028 20:15:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:15.028 20:15:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.028 20:15:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:15.029 20:15:27 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:15.029 [2024-05-16 20:15:28.005124] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:15.029 [2024-05-16 20:15:28.005189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890700 ] 00:06:15.287 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.287 [2024-05-16 20:15:28.065628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.287 [2024-05-16 20:15:28.136736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.287 [2024-05-16 20:15:28.177946] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.287 [2024-05-16 20:15:28.237534] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:15.546 A filename is required. 
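What the accel_missing_filename case above boils down to, as a standalone sketch with the binary path assumed relative to an SPDK build tree: a compress run started without an -l input file must refuse to run, and the wrapper only cares that the exit status is non-zero.

    # Run compress without -l; accel_perf should refuse to start
    # ("A filename is required."), so success here means the test failed.
    if ./build/examples/accel_perf -t 1 -w compress; then
        echo "unexpected success: compress should require -l <input file>" >&2
        exit 1
    fi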
00:06:15.546 20:15:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:15.546 20:15:28 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.546 20:15:28 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:15.546 20:15:28 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:15.546 20:15:28 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:15.546 20:15:28 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.546 00:06:15.546 real 0m0.330s 00:06:15.546 user 0m0.243s 00:06:15.546 sys 0m0.127s 00:06:15.546 20:15:28 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.546 20:15:28 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:15.546 ************************************ 00:06:15.546 END TEST accel_missing_filename 00:06:15.546 ************************************ 00:06:15.546 20:15:28 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:15.546 20:15:28 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:15.546 20:15:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.546 20:15:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.546 ************************************ 00:06:15.546 START TEST accel_compress_verify 00:06:15.546 ************************************ 00:06:15.546 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:15.546 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:15.546 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:15.546 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:15.546 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.546 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:15.546 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.546 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:15.546 20:15:28 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:15.546 20:15:28 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:15.546 20:15:28 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.546 20:15:28 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.546 20:15:28 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.546 20:15:28 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.546 20:15:28 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.546 20:15:28 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:15.546 20:15:28 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:15.546 [2024-05-16 20:15:28.400183] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:15.546 [2024-05-16 20:15:28.400253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890756 ] 00:06:15.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.546 [2024-05-16 20:15:28.460481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.546 [2024-05-16 20:15:28.531390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.805 [2024-05-16 20:15:28.571737] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.805 [2024-05-16 20:15:28.630911] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:15.805 00:06:15.805 Compression does not support the verify option, aborting. 00:06:15.805 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:15.805 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.805 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:15.805 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:15.805 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:15.805 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.805 00:06:15.805 real 0m0.328s 00:06:15.805 user 0m0.248s 00:06:15.805 sys 0m0.119s 00:06:15.805 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.805 20:15:28 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:15.805 ************************************ 00:06:15.805 END TEST accel_compress_verify 00:06:15.805 ************************************ 00:06:15.805 20:15:28 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:15.805 20:15:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:15.805 20:15:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.805 20:15:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.805 ************************************ 00:06:15.805 START TEST accel_wrong_workload 00:06:15.805 ************************************ 00:06:15.805 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:15.805 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:15.805 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:15.805 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:15.805 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.805 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:15.805 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.805 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:15.805 
20:15:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:15.805 20:15:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:15.805 20:15:28 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.805 20:15:28 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.805 20:15:28 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.805 20:15:28 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.805 20:15:28 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.805 20:15:28 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:15.805 20:15:28 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:15.805 Unsupported workload type: foobar 00:06:15.805 [2024-05-16 20:15:28.793297] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:15.805 accel_perf options: 00:06:15.805 [-h help message] 00:06:15.805 [-q queue depth per core] 00:06:15.805 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.805 [-T number of threads per core 00:06:15.805 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.805 [-t time in seconds] 00:06:15.805 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.805 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:15.805 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.805 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.805 [-S for crc32c workload, use this seed value (default 0) 00:06:15.805 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.805 [-f for fill workload, use this BYTE value (default 255) 00:06:15.805 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.805 [-y verify result if this switch is on] 00:06:15.805 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.805 Can be used to spread operations across a wider range of memory. 
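For contrast with the rejected 'foobar' run above, a sketch of two invocations using -w values from the usage text (paths again assumed relative to an SPDK build tree); the crc32c form is the one the later accel_crc32c test drives:

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # accepted: valid workload
    ./build/examples/accel_perf -t 1 -w foobar            # rejected: "Unsupported workload type: foobar"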
00:06:16.064 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:16.064 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.065 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.065 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.065 00:06:16.065 real 0m0.030s 00:06:16.065 user 0m0.017s 00:06:16.065 sys 0m0.012s 00:06:16.065 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.065 20:15:28 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 ************************************ 00:06:16.065 END TEST accel_wrong_workload 00:06:16.065 ************************************ 00:06:16.065 Error: writing output failed: Broken pipe 00:06:16.065 20:15:28 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:16.065 20:15:28 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:16.065 20:15:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.065 20:15:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 ************************************ 00:06:16.065 START TEST accel_negative_buffers 00:06:16.065 ************************************ 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:16.065 20:15:28 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:16.065 -x option must be non-negative. 
00:06:16.065 [2024-05-16 20:15:28.882436] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:16.065 accel_perf options: 00:06:16.065 [-h help message] 00:06:16.065 [-q queue depth per core] 00:06:16.065 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:16.065 [-T number of threads per core 00:06:16.065 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:16.065 [-t time in seconds] 00:06:16.065 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:16.065 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:16.065 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:16.065 [-l for compress/decompress workloads, name of uncompressed input file 00:06:16.065 [-S for crc32c workload, use this seed value (default 0) 00:06:16.065 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:16.065 [-f for fill workload, use this BYTE value (default 255) 00:06:16.065 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:16.065 [-y verify result if this switch is on] 00:06:16.065 [-a tasks to allocate per core (default: same value as -q)] 00:06:16.065 Can be used to spread operations across a wider range of memory. 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.065 00:06:16.065 real 0m0.021s 00:06:16.065 user 0m0.012s 00:06:16.065 sys 0m0.008s 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.065 20:15:28 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 ************************************ 00:06:16.065 END TEST accel_negative_buffers 00:06:16.065 ************************************ 00:06:16.065 Error: writing output failed: Broken pipe 00:06:16.065 20:15:28 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:16.065 20:15:28 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:16.065 20:15:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.065 20:15:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 ************************************ 00:06:16.065 START TEST accel_crc32c 00:06:16.065 ************************************ 00:06:16.065 20:15:28 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:16.065 20:15:28 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:16.065 [2024-05-16 20:15:28.981716] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:16.065 [2024-05-16 20:15:28.981786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891007 ] 00:06:16.065 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.065 [2024-05-16 20:15:29.046540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.324 [2024-05-16 20:15:29.126679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.324 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.324 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c 
-- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.325 20:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.702 20:15:30 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.702 20:15:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:17.703 20:15:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.703 00:06:17.703 real 0m1.350s 00:06:17.703 user 0m1.231s 00:06:17.703 sys 0m0.125s 00:06:17.703 20:15:30 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.703 20:15:30 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:17.703 ************************************ 00:06:17.703 END TEST accel_crc32c 00:06:17.703 ************************************ 00:06:17.703 20:15:30 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:17.703 20:15:30 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:17.703 20:15:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.703 20:15:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.703 ************************************ 00:06:17.703 START TEST accel_crc32c_C2 00:06:17.703 ************************************ 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- 
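The accel_crc32c run above finishes in roughly 1.35 s of wall time on the software module, checksumming 4096-byte buffers for one second with verification enabled (-y). As a rough illustration of what that software path has to compute, here is a textbook bitwise CRC-32C (Castagnoli polynomial, reflected form 0x82F63B78) over a 4096-byte buffer. This is only a sketch of the operation being benchmarked, not SPDK code, and the fill pattern is arbitrary.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c_sw(uint32_t crc, const uint8_t *p, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78 : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        uint8_t buf[4096];              /* same transfer size as the test */
        memset(buf, 0xA5, sizeof(buf)); /* arbitrary pattern, illustration only */
        printf("crc32c = 0x%08x\n", (unsigned)crc32c_sw(0, buf, sizeof(buf)));
        return 0;
    }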
accel/accel.sh@12 -- # build_accel_config 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:17.703 [2024-05-16 20:15:30.393438] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:17.703 [2024-05-16 20:15:30.393508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891254 ] 00:06:17.703 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.703 [2024-05-16 20:15:30.454882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.703 [2024-05-16 20:15:30.527709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.703 20:15:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.080 20:15:31 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.080 00:06:19.080 real 0m1.336s 00:06:19.080 user 0m1.220s 00:06:19.080 sys 0m0.121s 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.080 20:15:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:19.080 ************************************ 00:06:19.080 END TEST accel_crc32c_C2 00:06:19.080 ************************************ 00:06:19.080 20:15:31 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:19.080 20:15:31 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:19.080 20:15:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.080 20:15:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.080 ************************************ 00:06:19.080 START TEST accel_copy 00:06:19.080 ************************************ 00:06:19.080 20:15:31 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:19.081 20:15:31 
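accel_crc32c_C2 repeats the crc32c workload with '-C 2', which the test name suggests chains the checksum over two source segments instead of one (that reading of -C is an assumption from the trace, not taken from accel_perf documentation). A useful CRC property is that the checksum of a concatenation can be produced by feeding the CRC of the first segment in as the seed for the second; a minimal sketch of that chaining, using the same textbook crc32c_sw() as above:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c_sw(uint32_t crc, const uint8_t *p, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78 : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        uint8_t a[4096], b[4096], both[8192];   /* two 4 KiB segments */
        memset(a, 0x11, sizeof(a));
        memset(b, 0x22, sizeof(b));
        memcpy(both, a, sizeof(a));
        memcpy(both + sizeof(a), b, sizeof(b));

        /* CRC of the concatenation equals CRC of segment 2 seeded with CRC of segment 1. */
        uint32_t chained = crc32c_sw(crc32c_sw(0, a, sizeof(a)), b, sizeof(b));
        uint32_t whole   = crc32c_sw(0, both, sizeof(both));
        printf("chained=0x%08x whole=0x%08x %s\n", (unsigned)chained,
               (unsigned)whole, chained == whole ? "match" : "MISMATCH");
        return 0;
    }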
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:19.081 [2024-05-16 20:15:31.796291] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:19.081 [2024-05-16 20:15:31.796357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891502 ] 00:06:19.081 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.081 [2024-05-16 20:15:31.856179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.081 [2024-05-16 20:15:31.929193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.081 20:15:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:20.458 20:15:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.458 00:06:20.458 real 0m1.334s 00:06:20.458 user 0m1.228s 00:06:20.458 sys 0m0.112s 00:06:20.458 20:15:33 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.458 20:15:33 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:20.458 ************************************ 00:06:20.458 END TEST accel_copy 00:06:20.458 ************************************ 00:06:20.458 20:15:33 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.458 20:15:33 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:20.458 20:15:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.458 20:15:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.458 ************************************ 00:06:20.458 START TEST accel_fill 00:06:20.458 ************************************ 00:06:20.458 20:15:33 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.458 20:15:33 accel.accel_fill -- 
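The accel_copy case exercises the simplest operation: move 4096 bytes from a source buffer to a destination and, because -y is passed, verify the result. Conceptually the software module is doing little more than the following deliberately naive sketch (not the SPDK implementation):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char src[4096], dst[4096];

        memset(src, 0x5A, sizeof(src));      /* arbitrary source pattern */
        memset(dst, 0x00, sizeof(dst));

        memcpy(dst, src, sizeof(src));       /* the "copy" operation */

        /* -y style verification: destination must now equal the source. */
        puts(memcmp(dst, src, sizeof(src)) == 0 ? "copy verified" : "copy FAILED");
        return 0;
    }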
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:20.458 [2024-05-16 20:15:33.197065] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:20.458 [2024-05-16 20:15:33.197127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891753 ] 00:06:20.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.458 [2024-05-16 20:15:33.258855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.458 [2024-05-16 20:15:33.330803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.458 20:15:33 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.458 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.459 20:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.842 20:15:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.842 20:15:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.842 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.842 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.842 20:15:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.842 20:15:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.842 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.842 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:21.843 20:15:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.843 00:06:21.843 real 0m1.335s 00:06:21.843 user 0m1.222s 00:06:21.843 sys 0m0.118s 00:06:21.843 20:15:34 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.843 20:15:34 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:21.843 ************************************ 00:06:21.843 END TEST accel_fill 00:06:21.843 ************************************ 00:06:21.843 20:15:34 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:21.843 20:15:34 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:21.843 20:15:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.843 20:15:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.843 ************************************ 00:06:21.843 START TEST accel_copy_crc32c 00:06:21.843 ************************************ 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
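accel_fill is driven with '-f 128 -q 64 -a 64', and the trace above shows the value 0x80 being programmed, i.e. every byte of the 4096-byte destination is set to the fill pattern 128 (0x80). A minimal equivalent of that fill plus the -y verification follows; it is illustrative only, and the -q/-a knobs (presumably queue depth and buffer alignment) are not modelled here.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char dst[4096];
        const unsigned char pattern = 0x80;      /* -f 128 from the test command */

        memset(dst, pattern, sizeof(dst));       /* the "fill" operation */

        /* -y style verification: every byte must carry the pattern. */
        for (size_t i = 0; i < sizeof(dst); i++) {
            if (dst[i] != pattern) {
                puts("fill FAILED");
                return 1;
            }
        }
        puts("fill verified");
        return 0;
    }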
00:06:21.843 [2024-05-16 20:15:34.594112] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:21.843 [2024-05-16 20:15:34.594160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891998 ] 00:06:21.843 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.843 [2024-05-16 20:15:34.653796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.843 [2024-05-16 20:15:34.725668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.843 20:15:34 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 20:15:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.223 00:06:23.223 real 0m1.332s 00:06:23.223 user 0m1.218s 00:06:23.223 sys 0m0.120s 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.223 20:15:35 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:23.223 ************************************ 00:06:23.223 END TEST accel_copy_crc32c 00:06:23.223 ************************************ 00:06:23.223 20:15:35 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:23.223 20:15:35 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:23.223 20:15:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.223 20:15:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.223 ************************************ 00:06:23.223 START TEST accel_copy_crc32c_C2 00:06:23.223 ************************************ 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
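copy_crc32c combines the two previous primitives: the data is copied to the destination and a CRC-32C of it is produced in the same operation, which is what lets a storage stack checksum data while it is already being moved. A single-pass sketch of that idea, again a textbook illustration rather than SPDK's code:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Copy len bytes from src to dst while accumulating CRC-32C over them. */
    static uint32_t copy_crc32c_sw(uint8_t *dst, const uint8_t *src,
                                   size_t len, uint32_t crc)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            uint8_t byte = src[i];
            dst[i] = byte;                       /* the copy half */
            crc ^= byte;                         /* the crc32c half */
            for (int k = 0; k < 8; k++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78 : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        uint8_t src[4096], dst[4096];
        memset(src, 0x3C, sizeof(src));          /* arbitrary pattern */
        uint32_t crc = copy_crc32c_sw(dst, src, sizeof(src), 0);
        printf("copied %zu bytes, crc32c=0x%08x, verify=%s\n", sizeof(src),
               (unsigned)crc,
               memcmp(dst, src, sizeof(src)) == 0 ? "ok" : "FAILED");
        return 0;
    }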
copy_crc32c -y -C 2 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.223 20:15:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:23.223 [2024-05-16 20:15:35.987180] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:23.223 [2024-05-16 20:15:35.987243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892258 ] 00:06:23.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.223 [2024-05-16 20:15:36.046155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.223 [2024-05-16 20:15:36.116664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 
-- # accel_opc=copy_crc32c 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:23.223 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.224 20:15:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.602 00:06:24.602 real 0m1.327s 00:06:24.602 user 0m1.209s 00:06:24.602 sys 0m0.120s 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.602 20:15:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:24.602 
************************************ 00:06:24.602 END TEST accel_copy_crc32c_C2 00:06:24.602 ************************************ 00:06:24.602 20:15:37 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:24.602 20:15:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:24.602 20:15:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.602 20:15:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.602 ************************************ 00:06:24.602 START TEST accel_dualcast 00:06:24.602 ************************************ 00:06:24.602 20:15:37 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:24.602 [2024-05-16 20:15:37.365391] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
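The _C2 variant of copy_crc32c runs the same combined operation over a two-segment chain; the trace shows both a '4096 bytes' and an '8192 bytes' value, which reads as two 4 KiB source segments landing in one 8 KiB destination. That interpretation is an assumption based on the trace, not taken from SPDK documentation. Sketching the chained form (illustrative only):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t copy_crc32c_sw(uint8_t *dst, const uint8_t *src,
                                   size_t len, uint32_t crc)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            dst[i] = src[i];
            crc ^= src[i];
            for (int k = 0; k < 8; k++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78 : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        uint8_t seg0[4096], seg1[4096], dst[8192];   /* assumed 2 x 4 KiB -> 8 KiB */
        memset(seg0, 0x01, sizeof(seg0));
        memset(seg1, 0x02, sizeof(seg1));

        /* Chain: the CRC of segment 1 is seeded with the CRC of segment 0,
         * while both segments are copied into consecutive halves of dst. */
        uint32_t crc = copy_crc32c_sw(dst, seg0, sizeof(seg0), 0);
        crc = copy_crc32c_sw(dst + sizeof(seg0), seg1, sizeof(seg1), crc);
        printf("chained copy_crc32c over %zu bytes, crc=0x%08x\n",
               sizeof(dst), (unsigned)crc);
        return 0;
    }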
00:06:24.602 [2024-05-16 20:15:37.365435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892505 ] 00:06:24.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.602 [2024-05-16 20:15:37.424047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.602 [2024-05-16 20:15:37.494815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 
20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.602 20:15:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.603 20:15:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.603 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.603 20:15:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 20:15:38 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:25.981 20:15:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.981 00:06:25.981 real 0m1.319s 00:06:25.981 user 0m1.212s 00:06:25.981 sys 0m0.112s 00:06:25.981 20:15:38 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.981 20:15:38 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:25.981 ************************************ 00:06:25.981 END TEST accel_dualcast 00:06:25.981 ************************************ 00:06:25.981 20:15:38 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:25.981 20:15:38 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:25.982 20:15:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.982 20:15:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.982 ************************************ 00:06:25.982 START TEST accel_compare 00:06:25.982 ************************************ 00:06:25.982 20:15:38 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:25.982 [2024-05-16 20:15:38.751217] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:25.982 [2024-05-16 20:15:38.751280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892752 ] 00:06:25.982 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.982 [2024-05-16 20:15:38.812488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.982 [2024-05-16 20:15:38.883517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 20:15:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.359 20:15:40 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:27.359 20:15:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.359 00:06:27.359 real 0m1.335s 00:06:27.359 user 0m1.218s 00:06:27.359 sys 0m0.122s 00:06:27.359 20:15:40 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.359 20:15:40 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:27.359 ************************************ 00:06:27.359 END TEST accel_compare 00:06:27.359 ************************************ 00:06:27.359 20:15:40 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:27.359 20:15:40 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:27.359 20:15:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.359 20:15:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.359 ************************************ 00:06:27.359 START TEST accel_xor 00:06:27.359 ************************************ 00:06:27.359 20:15:40 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:27.359 20:15:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:27.359 [2024-05-16 20:15:40.139518] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:27.359 [2024-05-16 20:15:40.139565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893006 ] 00:06:27.359 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.359 [2024-05-16 20:15:40.199438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.359 [2024-05-16 20:15:40.272368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.360 20:15:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 
20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.736 00:06:28.736 real 0m1.333s 00:06:28.736 user 0m1.214s 00:06:28.736 sys 0m0.124s 00:06:28.736 20:15:41 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.736 20:15:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:28.736 ************************************ 00:06:28.736 END TEST accel_xor 00:06:28.736 ************************************ 00:06:28.736 20:15:41 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:28.736 20:15:41 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:28.736 20:15:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.736 20:15:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.736 ************************************ 00:06:28.736 START TEST accel_xor 00:06:28.736 ************************************ 00:06:28.736 20:15:41 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:28.736 [2024-05-16 20:15:41.536843] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
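Stripped of the run_test/accel_test wrappers, each test above comes down to the single accel_perf command line recorded in the xtrace; -c points at /dev/fd/62, which presumably carries the JSON produced by build_accel_config and is effectively empty here (accel_json_cfg=() never gains an entry and the [[ -n '' ]] check fails), which is consistent with every readback reporting the software module. The 3-source xor run, verbatim from the trace, with annotations added:

    # command line copied from the xtrace; the comments are interpretation, not part of the log
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w xor -y -x 3
    #   -t 1    run the workload for one second ('1 seconds' in the config readback)
    #   -w xor  opcode under test (accel_opc=xor in the readback)
    #   -y      verify results (this readback shows val=Yes; the DIF runs later omit -y and show val=No)
    #   -x 3    three xor source buffers, versus val=2 in the preceding xor test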
00:06:28.736 [2024-05-16 20:15:41.536895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893253 ] 00:06:28.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.736 [2024-05-16 20:15:41.599597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.736 [2024-05-16 20:15:41.670674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.736 20:15:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.737 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.737 20:15:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.176 
20:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.176 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.177 20:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.177 20:15:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:30.177 20:15:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.177 00:06:30.177 real 0m1.335s 00:06:30.177 user 0m1.225s 00:06:30.177 sys 0m0.116s 00:06:30.177 20:15:42 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.177 20:15:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:30.177 ************************************ 00:06:30.177 END TEST accel_xor 00:06:30.177 ************************************ 00:06:30.177 20:15:42 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:30.177 20:15:42 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:30.177 20:15:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.177 20:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.177 ************************************ 00:06:30.177 START TEST accel_dif_verify 00:06:30.177 ************************************ 00:06:30.177 20:15:42 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:30.177 20:15:42 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:30.177 [2024-05-16 20:15:42.935178] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:30.177 [2024-05-16 20:15:42.935239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893501 ] 00:06:30.177 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.177 [2024-05-16 20:15:42.996834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.177 [2024-05-16 20:15:43.067741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 
20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.177 20:15:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.553 
20:15:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:31.553 20:15:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.553 00:06:31.553 real 0m1.334s 00:06:31.553 user 0m1.225s 00:06:31.553 sys 0m0.116s 00:06:31.553 20:15:44 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.553 20:15:44 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.553 ************************************ 00:06:31.553 END TEST accel_dif_verify 00:06:31.553 ************************************ 00:06:31.553 20:15:44 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:31.553 20:15:44 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:31.553 20:15:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.553 20:15:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.553 ************************************ 00:06:31.553 START TEST accel_dif_generate 00:06:31.554 ************************************ 00:06:31.554 20:15:44 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 
20:15:44 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:31.554 [2024-05-16 20:15:44.333209] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:31.554 [2024-05-16 20:15:44.333257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893757 ] 00:06:31.554 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.554 [2024-05-16 20:15:44.392727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.554 [2024-05-16 20:15:44.463640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.554 20:15:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.931 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.932 20:15:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.932 20:15:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:32.932 20:15:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.932 00:06:32.932 real 0m1.330s 00:06:32.932 user 0m1.222s 00:06:32.932 sys 0m0.114s 00:06:32.932 
20:15:45 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.932 20:15:45 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:32.932 ************************************ 00:06:32.932 END TEST accel_dif_generate 00:06:32.932 ************************************ 00:06:32.932 20:15:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:32.932 20:15:45 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:32.932 20:15:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.932 20:15:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.932 ************************************ 00:06:32.932 START TEST accel_dif_generate_copy 00:06:32.932 ************************************ 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:32.932 [2024-05-16 20:15:45.725381] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:32.932 [2024-05-16 20:15:45.725454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894002 ] 00:06:32.932 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.932 [2024-05-16 20:15:45.785869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.932 [2024-05-16 20:15:45.857759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.932 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.306 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.307 00:06:34.307 real 0m1.334s 00:06:34.307 user 0m1.222s 00:06:34.307 sys 0m0.118s 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.307 20:15:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.307 ************************************ 00:06:34.307 END TEST accel_dif_generate_copy 00:06:34.307 ************************************ 00:06:34.307 20:15:47 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:34.307 20:15:47 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:34.307 20:15:47 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:34.307 20:15:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.307 20:15:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.307 ************************************ 00:06:34.307 START TEST accel_comp 00:06:34.307 ************************************ 00:06:34.307 20:15:47 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@17 -- # local 
accel_module 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:34.307 [2024-05-16 20:15:47.120980] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:34.307 [2024-05-16 20:15:47.121026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894249 ] 00:06:34.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.307 [2024-05-16 20:15:47.180886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.307 [2024-05-16 20:15:47.251920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.307 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.566 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.567 20:15:47 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.567 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:35.504 20:15:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.504 00:06:35.504 real 0m1.333s 00:06:35.504 user 0m1.220s 00:06:35.504 sys 0m0.119s 00:06:35.505 20:15:48 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.505 20:15:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:35.505 ************************************ 00:06:35.505 END TEST accel_comp 00:06:35.505 ************************************ 00:06:35.505 20:15:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.505 20:15:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:35.505 20:15:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.505 20:15:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.505 ************************************ 00:06:35.505 START TEST accel_decomp 00:06:35.505 ************************************ 00:06:35.505 20:15:48 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.505 20:15:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:35.763 20:15:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:35.764 [2024-05-16 20:15:48.516403] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:35.764 [2024-05-16 20:15:48.516452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894504 ] 00:06:35.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.764 [2024-05-16 20:15:48.575857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.764 [2024-05-16 20:15:48.648033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.764 20:15:48 accel.accel_decomp 
-- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.764 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.140 20:15:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.140 00:06:37.140 real 0m1.333s 00:06:37.140 user 0m1.219s 00:06:37.140 sys 0m0.119s 00:06:37.140 20:15:49 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.140 20:15:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:37.140 ************************************ 00:06:37.140 END TEST accel_decomp 00:06:37.140 ************************************ 00:06:37.140 20:15:49 accel -- 
accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.140 20:15:49 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:37.140 20:15:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.140 20:15:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.140 ************************************ 00:06:37.141 START TEST accel_decmop_full 00:06:37.141 ************************************ 00:06:37.141 20:15:49 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:37.141 20:15:49 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:37.141 [2024-05-16 20:15:49.913380] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:37.141 [2024-05-16 20:15:49.913434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894754 ] 00:06:37.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.141 [2024-05-16 20:15:49.973385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.141 [2024-05-16 20:15:50.047267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.141 20:15:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # 
read -r var val 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.518 20:15:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.518 00:06:38.518 real 0m1.347s 00:06:38.518 user 0m1.238s 00:06:38.518 sys 0m0.114s 00:06:38.518 20:15:51 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.518 20:15:51 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:38.518 ************************************ 00:06:38.518 END TEST accel_decmop_full 00:06:38.518 ************************************ 00:06:38.518 20:15:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.518 20:15:51 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:38.518 20:15:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.518 20:15:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.518 ************************************ 00:06:38.518 START TEST accel_decomp_mcore 00:06:38.518 ************************************ 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@15 
-- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:38.518 [2024-05-16 20:15:51.324447] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:38.518 [2024-05-16 20:15:51.324513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895006 ] 00:06:38.518 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.518 [2024-05-16 20:15:51.385542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.518 [2024-05-16 20:15:51.459117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.518 [2024-05-16 20:15:51.459216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.518 [2024-05-16 20:15:51.459304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.518 [2024-05-16 20:15:51.459306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.518 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 
00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.777 20:15:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.713 00:06:39.713 real 0m1.353s 00:06:39.713 user 0m4.568s 00:06:39.713 sys 0m0.128s 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.713 20:15:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:39.713 ************************************ 00:06:39.713 END TEST accel_decomp_mcore 00:06:39.713 ************************************ 00:06:39.713 20:15:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.713 20:15:52 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:39.713 20:15:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.713 20:15:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.972 ************************************ 00:06:39.972 START TEST accel_decomp_full_mcore 00:06:39.972 ************************************ 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.972 20:15:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:39.972 [2024-05-16 20:15:52.750088] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:39.972 [2024-05-16 20:15:52.750158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895258 ] 00:06:39.972 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.972 [2024-05-16 20:15:52.814001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.972 [2024-05-16 20:15:52.888795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.972 [2024-05-16 20:15:52.888894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.972 [2024-05-16 20:15:52.888970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.972 [2024-05-16 20:15:52.888971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.972 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.973 20:15:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.349 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.349 00:06:41.349 real 0m1.371s 00:06:41.350 user 0m4.615s 00:06:41.350 sys 0m0.132s 00:06:41.350 20:15:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.350 20:15:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:41.350 ************************************ 00:06:41.350 END TEST accel_decomp_full_mcore 00:06:41.350 ************************************ 00:06:41.350 20:15:54 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.350 20:15:54 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:41.350 20:15:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.350 20:15:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.350 ************************************ 00:06:41.350 START TEST accel_decomp_mthread 00:06:41.350 ************************************ 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:41.350 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
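All of the decompress cases in this block drive the same accel_perf example binary; only the flags change. A hedged sketch of running the two main variants by hand, with the command lines copied from the invocations traced above (the -c /dev/fd/62 config descriptor is supplied by the test harness and omitted here, paths are shortened to be relative to the SPDK checkout, and the flag meanings are inferred from the test names rather than from accel_perf's help text):

  # multi-core decompress of the bundled test file; core mask 0xf = the 4 reactors seen in the log
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf
  # "full" variant: -o 0 appears to process the whole 111250-byte buffer instead of 4096-byte chunks
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf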
00:06:41.350 [2024-05-16 20:15:54.191766] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:41.350 [2024-05-16 20:15:54.191828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895510 ] 00:06:41.350 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.350 [2024-05-16 20:15:54.253565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.350 [2024-05-16 20:15:54.325429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.609 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.610 20:15:54 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.547 00:06:42.547 real 0m1.346s 00:06:42.547 user 0m1.243s 00:06:42.547 sys 0m0.117s 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.547 20:15:55 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:42.547 ************************************ 00:06:42.547 END TEST accel_decomp_mthread 00:06:42.547 ************************************ 00:06:42.807 20:15:55 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.807 20:15:55 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:42.807 20:15:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.807 20:15:55 accel 
-- common/autotest_common.sh@10 -- # set +x 00:06:42.807 ************************************ 00:06:42.807 START TEST accel_decomp_full_mthread 00:06:42.807 ************************************ 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:42.807 [2024-05-16 20:15:55.607432] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
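The long runs of '# IFS=:', '# read -r var val', and '# val=...' entries here are xtrace output from the accel_test helper in test/accel/accel.sh, which walks the expected settings as colon-separated name/value pairs and records the operation and module it sees. A rough, self-contained reproduction of that shell pattern, reconstructed only from the trace itself (the key names 'opc' and 'module' are illustrative, not the helper's real keys, and the real helper does more than this):

  # toy stand-in for the val-parsing loop visible in the trace above
  printf 'opc:decompress\nmodule:software\n' |
  while IFS=: read -r var val; do
    case "$var" in
      opc) accel_opc=$val ;;        # shows up in the log as accel_opc=decompress
      module) accel_module=$val ;;  # shows up in the log as accel_module=software
      *) ;;                         # other expected settings are handled the same way
    esac
    echo "$var -> $val"
  done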
00:06:42.807 [2024-05-16 20:15:55.607499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895761 ] 00:06:42.807 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.807 [2024-05-16 20:15:55.668389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.807 [2024-05-16 20:15:55.740682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.807 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.065 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.065 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.065 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.065 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:06:43.065 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.065 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.065 20:15:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.001 00:06:44.001 real 0m1.367s 00:06:44.001 user 0m1.262s 00:06:44.001 sys 0m0.119s 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.001 20:15:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:44.001 ************************************ 00:06:44.001 END TEST accel_decomp_full_mthread 00:06:44.001 
************************************ 00:06:44.001 20:15:56 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:44.001 20:15:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:44.001 20:15:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:44.001 20:15:56 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:44.001 20:15:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.001 20:15:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.001 20:15:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.001 20:15:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.001 20:15:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.001 20:15:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.001 20:15:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.001 20:15:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:44.001 20:15:56 accel -- accel/accel.sh@41 -- # jq -r . 00:06:44.261 ************************************ 00:06:44.261 START TEST accel_dif_functional_tests 00:06:44.261 ************************************ 00:06:44.261 20:15:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:44.261 [2024-05-16 20:15:57.058708] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:44.261 [2024-05-16 20:15:57.058741] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896009 ] 00:06:44.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.261 [2024-05-16 20:15:57.116862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.261 [2024-05-16 20:15:57.190134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.261 [2024-05-16 20:15:57.190237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.261 [2024-05-16 20:15:57.190239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.520 00:06:44.520 00:06:44.520 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.520 http://cunit.sourceforge.net/ 00:06:44.520 00:06:44.520 00:06:44.520 Suite: accel_dif 00:06:44.520 Test: verify: DIF generated, GUARD check ...passed 00:06:44.520 Test: verify: DIF generated, APPTAG check ...passed 00:06:44.520 Test: verify: DIF generated, REFTAG check ...passed 00:06:44.520 Test: verify: DIF not generated, GUARD check ...[2024-05-16 20:15:57.258710] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:44.520 passed 00:06:44.520 Test: verify: DIF not generated, APPTAG check ...[2024-05-16 20:15:57.258752] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:44.520 passed 00:06:44.520 Test: verify: DIF not generated, REFTAG check ...[2024-05-16 20:15:57.258787] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:44.520 passed 00:06:44.520 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:44.520 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-16 20:15:57.258828] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:44.520 passed 00:06:44.520 Test: verify: 
APPTAG incorrect, no APPTAG check ...passed 00:06:44.520 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:44.520 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:44.520 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-16 20:15:57.258924] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:44.520 passed 00:06:44.520 Test: verify copy: DIF generated, GUARD check ...passed 00:06:44.520 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:44.520 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:44.520 Test: verify copy: DIF not generated, GUARD check ...[2024-05-16 20:15:57.259031] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:44.520 passed 00:06:44.520 Test: verify copy: DIF not generated, APPTAG check ...[2024-05-16 20:15:57.259052] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:44.520 passed 00:06:44.520 Test: verify copy: DIF not generated, REFTAG check ...[2024-05-16 20:15:57.259070] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:44.520 passed 00:06:44.520 Test: generate copy: DIF generated, GUARD check ...passed 00:06:44.520 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:44.520 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:44.520 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:44.520 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:44.520 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:44.520 Test: generate copy: iovecs-len validate ...[2024-05-16 20:15:57.259231] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:44.520 passed 00:06:44.520 Test: generate copy: buffer alignment validate ...passed 00:06:44.520 00:06:44.520 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.520 suites 1 1 n/a 0 0 00:06:44.520 tests 26 26 26 0 0 00:06:44.520 asserts 115 115 115 0 n/a 00:06:44.520 00:06:44.520 Elapsed time = 0.000 seconds 00:06:44.520 00:06:44.520 real 0m0.407s 00:06:44.520 user 0m0.619s 00:06:44.520 sys 0m0.140s 00:06:44.520 20:15:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.520 20:15:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:44.520 ************************************ 00:06:44.520 END TEST accel_dif_functional_tests 00:06:44.520 ************************************ 00:06:44.520 00:06:44.520 real 0m30.947s 00:06:44.520 user 0m34.541s 00:06:44.520 sys 0m4.253s 00:06:44.520 20:15:57 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.520 20:15:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.520 ************************************ 00:06:44.520 END TEST accel 00:06:44.520 ************************************ 00:06:44.520 20:15:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:44.520 20:15:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.520 20:15:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.520 20:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:44.779 ************************************ 00:06:44.779 START TEST accel_rpc 00:06:44.779 ************************************ 00:06:44.779 20:15:57 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:44.779 * Looking for test storage... 00:06:44.779 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:44.779 20:15:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:44.779 20:15:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2896267 00:06:44.779 20:15:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2896267 00:06:44.779 20:15:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:44.779 20:15:57 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2896267 ']' 00:06:44.779 20:15:57 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.779 20:15:57 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.779 20:15:57 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.779 20:15:57 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.779 20:15:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.779 [2024-05-16 20:15:57.663979] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
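The accel_rpc test starting here runs the SPDK target in RPC-only mode and drives it from scripts/rpc.py. A hedged sketch of the equivalent manual steps, using only binaries and RPC names that appear in this log (the polling loop is a simplification of the harness's waitforlisten helper, and /var/tmp/spdk.sock is the default socket it waits on):

  ./build/bin/spdk_tgt --wait-for-rpc &                       # start the target with subsystem init paused
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                                 # rough stand-in for waitforlisten
  done
  ./scripts/rpc.py accel_assign_opc -o copy -m software       # pin the copy opcode to the software module
  ./scripts/rpc.py framework_start_init                       # let subsystem initialization finish
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # prints "software" if the assignment stuck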
00:06:44.779 [2024-05-16 20:15:57.664031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896267 ] 00:06:44.779 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.779 [2024-05-16 20:15:57.722399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.038 [2024-05-16 20:15:57.803910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.606 20:15:58 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.606 20:15:58 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:45.606 20:15:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:45.606 20:15:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:45.606 20:15:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:45.606 20:15:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:45.606 20:15:58 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:45.606 20:15:58 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.606 20:15:58 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.606 20:15:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.606 ************************************ 00:06:45.606 START TEST accel_assign_opcode 00:06:45.606 ************************************ 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.606 [2024-05-16 20:15:58.513981] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.606 [2024-05-16 20:15:58.521993] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.606 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.865 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.865 20:15:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:45.865 20:15:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:45.865 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.865 20:15:58 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:45.865 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.865 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.865 software 00:06:45.865 00:06:45.865 real 0m0.229s 00:06:45.865 user 0m0.045s 00:06:45.865 sys 0m0.011s 00:06:45.865 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.865 20:15:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.865 ************************************ 00:06:45.865 END TEST accel_assign_opcode 00:06:45.865 ************************************ 00:06:45.865 20:15:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2896267 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2896267 ']' 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2896267 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2896267 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2896267' 00:06:45.865 killing process with pid 2896267 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@965 -- # kill 2896267 00:06:45.865 20:15:58 accel_rpc -- common/autotest_common.sh@970 -- # wait 2896267 00:06:46.433 00:06:46.433 real 0m1.589s 00:06:46.433 user 0m1.678s 00:06:46.433 sys 0m0.424s 00:06:46.433 20:15:59 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.433 20:15:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.433 ************************************ 00:06:46.433 END TEST accel_rpc 00:06:46.433 ************************************ 00:06:46.433 20:15:59 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:46.433 20:15:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.433 20:15:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.433 20:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:46.434 ************************************ 00:06:46.434 START TEST app_cmdline 00:06:46.434 ************************************ 00:06:46.434 20:15:59 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:46.434 * Looking for test storage... 
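The app_cmdline test that follows launches spdk_tgt with --rpcs-allowed, so only spdk_get_version and rpc_get_methods may be called; anything else is expected to fail, which is exactly the env_dpdk_get_mem_stats "Method not found" response captured further down. A hedged sketch of checking that behaviour by hand, with the flag value and method names taken verbatim from this log:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version        # allowed: returns the JSON version object shown below
  ./scripts/rpc.py env_dpdk_get_mem_stats  # not on the allow-list: expect JSON-RPC error -32601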
00:06:46.434 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:46.434 20:15:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:46.434 20:15:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2896600 00:06:46.434 20:15:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2896600 00:06:46.434 20:15:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:46.434 20:15:59 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2896600 ']' 00:06:46.434 20:15:59 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.434 20:15:59 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.434 20:15:59 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.434 20:15:59 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.434 20:15:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.434 [2024-05-16 20:15:59.330865] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:46.434 [2024-05-16 20:15:59.330911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896600 ] 00:06:46.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.434 [2024-05-16 20:15:59.391971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.692 [2024-05-16 20:15:59.465438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.260 20:16:00 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.260 20:16:00 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:47.260 20:16:00 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:47.519 { 00:06:47.519 "version": "SPDK v24.09-pre git sha1 cf8ec7cfe", 00:06:47.519 "fields": { 00:06:47.519 "major": 24, 00:06:47.519 "minor": 9, 00:06:47.519 "patch": 0, 00:06:47.519 "suffix": "-pre", 00:06:47.519 "commit": "cf8ec7cfe" 00:06:47.519 } 00:06:47.519 } 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:47.519 20:16:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:47.519 20:16:00 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.778 request: 00:06:47.778 { 00:06:47.778 "method": "env_dpdk_get_mem_stats", 00:06:47.778 "req_id": 1 00:06:47.778 } 00:06:47.778 Got JSON-RPC error response 00:06:47.778 response: 00:06:47.778 { 00:06:47.778 "code": -32601, 00:06:47.778 "message": "Method not found" 00:06:47.778 } 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.778 20:16:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2896600 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2896600 ']' 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2896600 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2896600 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2896600' 00:06:47.778 killing process with pid 2896600 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@965 -- # kill 2896600 00:06:47.778 20:16:00 app_cmdline -- common/autotest_common.sh@970 -- # wait 2896600 00:06:48.037 00:06:48.037 real 0m1.686s 00:06:48.037 user 0m2.018s 00:06:48.038 sys 0m0.437s 00:06:48.038 20:16:00 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.038 20:16:00 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:48.038 ************************************ 00:06:48.038 END TEST app_cmdline 00:06:48.038 ************************************ 00:06:48.038 20:16:00 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:48.038 20:16:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.038 20:16:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.038 20:16:00 -- common/autotest_common.sh@10 -- # set +x 00:06:48.038 ************************************ 00:06:48.038 START TEST version 00:06:48.038 ************************************ 00:06:48.038 20:16:00 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:48.297 * Looking for test storage... 00:06:48.297 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:48.297 20:16:01 version -- app/version.sh@17 -- # get_header_version major 00:06:48.297 20:16:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:48.297 20:16:01 version -- app/version.sh@14 -- # cut -f2 00:06:48.297 20:16:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.297 20:16:01 version -- app/version.sh@17 -- # major=24 00:06:48.297 20:16:01 version -- app/version.sh@18 -- # get_header_version minor 00:06:48.297 20:16:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:48.297 20:16:01 version -- app/version.sh@14 -- # cut -f2 00:06:48.297 20:16:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.297 20:16:01 version -- app/version.sh@18 -- # minor=9 00:06:48.297 20:16:01 version -- app/version.sh@19 -- # get_header_version patch 00:06:48.297 20:16:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:48.297 20:16:01 version -- app/version.sh@14 -- # cut -f2 00:06:48.297 20:16:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.297 20:16:01 version -- app/version.sh@19 -- # patch=0 00:06:48.297 20:16:01 version -- app/version.sh@20 -- # get_header_version suffix 00:06:48.297 20:16:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:48.297 20:16:01 version -- app/version.sh@14 -- # cut -f2 00:06:48.297 20:16:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.297 20:16:01 version -- app/version.sh@20 -- # suffix=-pre 00:06:48.297 20:16:01 version -- app/version.sh@22 -- # version=24.9 00:06:48.297 20:16:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:48.297 20:16:01 version -- app/version.sh@28 -- # version=24.9rc0 00:06:48.297 20:16:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:48.297 20:16:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:48.297 20:16:01 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:48.297 20:16:01 version -- app/version.sh@31 -- # [[ 24.9rc0 == 
\2\4\.\9\r\c\0 ]] 00:06:48.297 00:06:48.297 real 0m0.161s 00:06:48.297 user 0m0.086s 00:06:48.297 sys 0m0.114s 00:06:48.297 20:16:01 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.297 20:16:01 version -- common/autotest_common.sh@10 -- # set +x 00:06:48.297 ************************************ 00:06:48.297 END TEST version 00:06:48.297 ************************************ 00:06:48.297 20:16:01 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:48.297 20:16:01 -- spdk/autotest.sh@198 -- # uname -s 00:06:48.297 20:16:01 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:48.297 20:16:01 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:48.297 20:16:01 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:48.297 20:16:01 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:48.297 20:16:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:48.297 20:16:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:48.297 20:16:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.297 20:16:01 -- common/autotest_common.sh@10 -- # set +x 00:06:48.297 20:16:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:48.297 20:16:01 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:48.297 20:16:01 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:48.297 20:16:01 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:48.297 20:16:01 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:06:48.297 20:16:01 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:48.297 20:16:01 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:48.297 20:16:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.297 20:16:01 -- common/autotest_common.sh@10 -- # set +x 00:06:48.297 ************************************ 00:06:48.297 START TEST nvmf_rdma 00:06:48.297 ************************************ 00:06:48.297 20:16:01 nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:48.297 * Looking for test storage... 00:06:48.556 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:48.556 20:16:01 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.556 20:16:01 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.556 20:16:01 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.556 20:16:01 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.556 20:16:01 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.556 20:16:01 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.556 20:16:01 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:06:48.556 20:16:01 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:48.556 20:16:01 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:48.556 20:16:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:48.556 20:16:01 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:48.556 20:16:01 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:48.556 20:16:01 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.556 20:16:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:48.556 ************************************ 00:06:48.556 START TEST nvmf_example 00:06:48.556 ************************************ 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:48.556 * Looking for test storage... 
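
Before any nvmf test runs, nvmf/common.sh (sourced above) pins the addressing and host identity that every later RPC and connect will use. Roughly, with the concrete values from this run shown in comments (a sketch of the variables, not the full script):

    NET_TYPE=phy                          # real Mellanox NICs in this job, not soft-RoCE
    NVMF_PORT=4420
    NVMF_IP_PREFIX=192.168.100            # target IPs start at .8 (NVMF_IP_LEAST_ADDR=8)
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # here: nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
                                          # the same UUID is reused as NVME_HOSTID
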
00:06:48.556 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.556 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:48.557 20:16:01 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:48.557 20:16:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.119 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.119 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:55.119 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:55.119 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:55.119 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:55.119 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
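
gather_supported_nvmf_pci_devs, which starts here, walks a fixed table of Intel (e810/x722) and Mellanox PCI device IDs and, on this host, settles on the two ConnectX-4 Lx ports reported a few lines further down. Outside the harness the same check can be approximated with lspci (0x15b3 is the Mellanox vendor ID, 0x1015 the ConnectX-4 Lx device ID):

    # list ConnectX-4 Lx functions with full PCI domain addresses
    lspci -D -d 15b3:1015
    # expected on this system: 0000:da:00.0 and 0000:da:00.1,
    # whose net devices the harness later reports as mlx_0_0 and mlx_0_1
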
00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:06:55.120 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:06:55.120 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.120 20:16:07 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:06:55.120 Found net devices under 0000:da:00.0: mlx_0_0 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:06:55.120 Found net devices under 0000:da:00.1: mlx_0_1 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:55.120 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:55.120 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:06:55.120 altname enp218s0f0np0 00:06:55.120 altname ens818f0np0 00:06:55.120 inet 192.168.100.8/24 scope global mlx_0_0 00:06:55.120 valid_lft forever preferred_lft forever 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:55.120 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:55.120 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:06:55.120 altname enp218s0f1np1 00:06:55.120 altname ens818f1np1 00:06:55.120 inet 192.168.100.9/24 scope global mlx_0_1 00:06:55.120 valid_lft forever preferred_lft forever 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:55.120 20:16:07 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:55.120 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:55.121 192.168.100.9' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:55.121 192.168.100.9' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:55.121 
20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:55.121 192.168.100.9' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2900528 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2900528 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 2900528 ']' 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
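
By this point the harness has loaded the kernel RDMA stack and put both ConnectX ports on the 192.168.100.0/24 test subnet. A quick manual recap of the same state, using the interface names and addresses from the trace (the ibv_devices check is optional and assumes rdma-core is installed):

    # modules the harness loads before configuring the NICs
    sudo modprobe -a ib_core ib_cm ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma
    # addresses assigned by allocate_nic_ips
    ip -o -4 addr show mlx_0_0            # 192.168.100.8/24
    ip -o -4 addr show mlx_0_1            # 192.168.100.9/24
    # optional: confirm the RDMA devices are visible to userspace
    ibv_devices
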
00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.121 20:16:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.688 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.688 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:06:55.688 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:55.688 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.688 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.688 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:55.688 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.688 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:55.948 20:16:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
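
The block above is the heart of the example test: it stands up an RDMA transport inside the example nvmf target, backs a subsystem with a 64 MiB malloc bdev, exposes it on 192.168.100.8:4420, and then drives it with spdk_nvme_perf for 10 seconds. Condensed into the underlying commands (a sketch, not a drop-in script; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the default socket):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512            # 64 MiB bdev, 512-byte blocks -> "Malloc0"
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
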
00:06:55.948 EAL: No free 2048 kB hugepages reported on node 1
00:07:08.156 Initializing NVMe Controllers
00:07:08.156 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:07:08.156 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:08.156 Initialization complete. Launching workers.
00:07:08.156 ========================================================
00:07:08.156                                                                                Latency(us)
00:07:08.156 Device Information                                                          :       IOPS      MiB/s    Average        min        max
00:07:08.157 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   24803.80      96.89    2579.92     641.92   14987.11
00:07:08.157 ========================================================
00:07:08.157 Total                                                                       :   24803.80      96.89    2579.92     641.92   14987.11
00:07:08.157
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:07:08.157 rmmod nvme_rdma
00:07:08.157 rmmod nvme_fabrics
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2900528 ']'
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2900528
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 2900528 ']'
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 2900528
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # uname
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2900528
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']'
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2900528'
00:07:08.157 killing process with pid 2900528
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@965 -- # kill 2900528
00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@970 -- # wait 2900528
00:07:08.157 [2024-05-16 20:16:20.266113] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:07:08.157 nvmf threads initialize successfully
00:07:08.157 bdev subsystem init successfully
00:07:08.157 created a nvmf target service
00:07:08.157 create targets's poll groups done
00:07:08.157 all subsystems of target
started 00:07:08.157 nvmf target is running 00:07:08.157 all subsystems of target stopped 00:07:08.157 destroy targets's poll groups done 00:07:08.157 destroyed the nvmf target service 00:07:08.157 bdev subsystem finish successfully 00:07:08.157 nvmf threads destroy successfully 00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.157 00:07:08.157 real 0m19.100s 00:07:08.157 user 0m52.291s 00:07:08.157 sys 0m5.023s 00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.157 20:16:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.157 ************************************ 00:07:08.157 END TEST nvmf_example 00:07:08.157 ************************************ 00:07:08.157 20:16:20 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:08.157 20:16:20 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:08.157 20:16:20 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.157 20:16:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:08.157 ************************************ 00:07:08.157 START TEST nvmf_filesystem 00:07:08.157 ************************************ 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:08.157 * Looking for test storage... 
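
nvmftestfini, traced just above, tears the example environment back down in reverse order of setup. Roughly what those trace lines execute (the PID is the one from this run, and kill/wait assume the shell that launched the target):

    sudo modprobe -v -r nvme-rdma        # the -v output above: rmmod nvme_rdma, rmmod nvme_fabrics
    sudo modprobe -v -r nvme-fabrics     # retried under set +e in common.sh, so a failure here is tolerated
    kill 2900528 && wait 2900528         # killprocess: stop the example nvmf target and reap it
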
00:07:08.157 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:08.157 20:16:20 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:08.157 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # 
CONFIG_UBSAN=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # 
_test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:08.158 #define SPDK_CONFIG_H 00:07:08.158 #define SPDK_CONFIG_APPS 1 00:07:08.158 #define SPDK_CONFIG_ARCH native 00:07:08.158 #undef SPDK_CONFIG_ASAN 00:07:08.158 #undef SPDK_CONFIG_AVAHI 00:07:08.158 #undef SPDK_CONFIG_CET 00:07:08.158 #define SPDK_CONFIG_COVERAGE 1 00:07:08.158 #define SPDK_CONFIG_CROSS_PREFIX 00:07:08.158 #undef SPDK_CONFIG_CRYPTO 00:07:08.158 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:08.158 #undef SPDK_CONFIG_CUSTOMOCF 00:07:08.158 #undef SPDK_CONFIG_DAOS 00:07:08.158 #define SPDK_CONFIG_DAOS_DIR 00:07:08.158 #define SPDK_CONFIG_DEBUG 1 00:07:08.158 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:08.158 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:08.158 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:08.158 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:08.158 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:08.158 #undef SPDK_CONFIG_DPDK_UADK 00:07:08.158 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:08.158 #define SPDK_CONFIG_EXAMPLES 1 00:07:08.158 #undef SPDK_CONFIG_FC 00:07:08.158 #define SPDK_CONFIG_FC_PATH 00:07:08.158 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:08.158 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:08.158 #undef SPDK_CONFIG_FUSE 00:07:08.158 #undef SPDK_CONFIG_FUZZER 00:07:08.158 #define SPDK_CONFIG_FUZZER_LIB 00:07:08.158 #undef SPDK_CONFIG_GOLANG 00:07:08.158 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:08.158 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:08.158 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:08.158 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:08.158 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:08.158 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:08.158 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:08.158 #define SPDK_CONFIG_IDXD 1 00:07:08.158 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:08.158 #undef SPDK_CONFIG_IPSEC_MB 00:07:08.158 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:08.158 #define SPDK_CONFIG_ISAL 1 00:07:08.158 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:08.158 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:08.158 #define SPDK_CONFIG_LIBDIR 00:07:08.158 #undef SPDK_CONFIG_LTO 00:07:08.158 #define SPDK_CONFIG_MAX_LCORES 00:07:08.158 #define SPDK_CONFIG_NVME_CUSE 1 00:07:08.158 #undef SPDK_CONFIG_OCF 00:07:08.158 #define SPDK_CONFIG_OCF_PATH 00:07:08.158 #define SPDK_CONFIG_OPENSSL_PATH 00:07:08.158 #undef 
SPDK_CONFIG_PGO_CAPTURE 00:07:08.158 #define SPDK_CONFIG_PGO_DIR 00:07:08.158 #undef SPDK_CONFIG_PGO_USE 00:07:08.158 #define SPDK_CONFIG_PREFIX /usr/local 00:07:08.158 #undef SPDK_CONFIG_RAID5F 00:07:08.158 #undef SPDK_CONFIG_RBD 00:07:08.158 #define SPDK_CONFIG_RDMA 1 00:07:08.158 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:08.158 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:08.158 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:08.158 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:08.158 #define SPDK_CONFIG_SHARED 1 00:07:08.158 #undef SPDK_CONFIG_SMA 00:07:08.158 #define SPDK_CONFIG_TESTS 1 00:07:08.158 #undef SPDK_CONFIG_TSAN 00:07:08.158 #define SPDK_CONFIG_UBLK 1 00:07:08.158 #define SPDK_CONFIG_UBSAN 1 00:07:08.158 #undef SPDK_CONFIG_UNIT_TESTS 00:07:08.158 #undef SPDK_CONFIG_URING 00:07:08.158 #define SPDK_CONFIG_URING_PATH 00:07:08.158 #undef SPDK_CONFIG_URING_ZNS 00:07:08.158 #undef SPDK_CONFIG_USDT 00:07:08.158 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:08.158 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:08.158 #undef SPDK_CONFIG_VFIO_USER 00:07:08.158 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:08.158 #define SPDK_CONFIG_VHOST 1 00:07:08.158 #define SPDK_CONFIG_VIRTIO 1 00:07:08.158 #undef SPDK_CONFIG_VTUNE 00:07:08.158 #define SPDK_CONFIG_VTUNE_DIR 00:07:08.158 #define SPDK_CONFIG_WERROR 1 00:07:08.158 #define SPDK_CONFIG_WPDK_DIR 00:07:08.158 #undef SPDK_CONFIG_XNVME 00:07:08.158 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.158 20:16:20 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # : rdma 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:08.159 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:08.159 20:16:20 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # : mlx5 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.160 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j96 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 2902682 ]] 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 2902682 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.I0gWBN 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.I0gWBN/tests/target /tmp/spdk.I0gWBN 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1052192768 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4232237056 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=183332081664 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=195974324224 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12642242560 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=97931526144 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987162112 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=39185281024 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=39194865664 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9584640 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=97984802816 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987162112 
00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=2359296 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=19597426688 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=19597430784 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:08.161 * Looking for test storage... 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=183332081664 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:08.161 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=14856835072 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:08.162 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 
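The trace above is the harness's set_test_storage step: it resolves the mount point backing the test directory with df, compares the available space against the roughly 2.2 GB it requested, and exports SPDK_TEST_STORAGE once a candidate passes. A simplified sketch of that check, assuming GNU df and leaving out the tmpfs/ramfs special cases and the 95 % fill-level guard the real helper also applies:

  # pick a storage dir for the test, preferring the test directory itself
  requested_size=2214592512                          # 2 GiB + 64 MiB, as in the log
  target_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
  mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  avail=$(df -B1 --output=avail "$mount_point" | tail -n1)
  if (( avail >= requested_size )); then
      export SPDK_TEST_STORAGE=$target_dir
  else
      export SPDK_TEST_STORAGE=$(mktemp -dt spdk.XXXXXX)   # fall back to a temp dir
  fi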
00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:08.162 20:16:20 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:14.735 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:14.735 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:14.736 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:14.736 Found net devices under 0000:da:00.0: mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:14.736 Found net devices under 0000:da:00.1: mlx_0_1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem 
-- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:14.736 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:14.736 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:14.736 altname enp218s0f0np0 00:07:14.736 altname ens818f0np0 00:07:14.736 inet 192.168.100.8/24 scope global mlx_0_0 00:07:14.736 valid_lft forever preferred_lft forever 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show 
mlx_0_1 00:07:14.736 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:14.736 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:14.736 altname enp218s0f1np1 00:07:14.736 altname ens818f1np1 00:07:14.736 inet 192.168.100.9/24 scope global mlx_0_1 00:07:14.736 valid_lft forever preferred_lft forever 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 
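The address lookup traced above reduces to a single pipeline over ip -o -4. A self-contained sketch of what get_ip_address does, with the interface names and addresses as observed on this rig:

# Sketch: pull the primary IPv4 address off an RDMA netdev, as get_ip_address does in the trace.
get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address: "<idx>: <if> inet <addr>/<prefix> ..."
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8
get_ip_address mlx_0_1   # -> 192.168.100.9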
00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:14.736 192.168.100.9' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:14.736 192.168.100.9' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:14.736 192.168.100.9' 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:07:14.736 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.737 ************************************ 00:07:14.737 START TEST nvmf_filesystem_no_in_capsule 00:07:14.737 ************************************ 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2906231 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2906231 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2906231 ']' 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:14.737 20:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.737 [2024-05-16 20:16:26.850947] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:14.737 [2024-05-16 20:16:26.850993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.737 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.737 [2024-05-16 20:16:26.911803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.737 [2024-05-16 20:16:26.992901] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.737 [2024-05-16 20:16:26.992937] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.737 [2024-05-16 20:16:26.992944] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.737 [2024-05-16 20:16:26.992949] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.737 [2024-05-16 20:16:26.992954] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
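nvmfappstart has just launched the target binary and is waiting for its RPC socket to come up. The exact waitforlisten implementation is not visible in this trace; a minimal stand-in that polls the socket with a known RPC (rpc_get_methods) could look like:

# Sketch: start nvmf_tgt (from the SPDK repo root) and block until /var/tmp/spdk.sock answers RPCs.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done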
00:07:14.737 [2024-05-16 20:16:26.993014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.737 [2024-05-16 20:16:26.993106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.737 [2024-05-16 20:16:26.993196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.737 [2024-05-16 20:16:26.993197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.737 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.737 [2024-05-16 20:16:27.692322] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:14.737 [2024-05-16 20:16:27.713820] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23139b0/0x2317ea0) succeed. 00:07:14.737 [2024-05-16 20:16:27.724246] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2314ff0/0x2359530) succeed. 
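With the reactors up and the RDMA transport created with -c 0 (in-capsule data disabled for this first pass; note the target's warning that 256 bytes is the minimum it will actually use), the trace goes on to provision the target through rpc_cmd, which the harness routes through scripts/rpc.py. The same sequence written out directly:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                      # 512 MB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420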
00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.996 Malloc1 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.996 [2024-05-16 20:16:27.967757] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:14.996 [2024-05-16 20:16:27.968147] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # 
rpc_cmd bdev_get_bdevs -b Malloc1 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.996 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.254 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.254 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:15.254 { 00:07:15.254 "name": "Malloc1", 00:07:15.254 "aliases": [ 00:07:15.254 "c81f886e-5add-4bc1-ac02-df1a36aaa0a4" 00:07:15.254 ], 00:07:15.254 "product_name": "Malloc disk", 00:07:15.254 "block_size": 512, 00:07:15.254 "num_blocks": 1048576, 00:07:15.254 "uuid": "c81f886e-5add-4bc1-ac02-df1a36aaa0a4", 00:07:15.254 "assigned_rate_limits": { 00:07:15.254 "rw_ios_per_sec": 0, 00:07:15.254 "rw_mbytes_per_sec": 0, 00:07:15.254 "r_mbytes_per_sec": 0, 00:07:15.254 "w_mbytes_per_sec": 0 00:07:15.254 }, 00:07:15.254 "claimed": true, 00:07:15.254 "claim_type": "exclusive_write", 00:07:15.254 "zoned": false, 00:07:15.254 "supported_io_types": { 00:07:15.254 "read": true, 00:07:15.254 "write": true, 00:07:15.254 "unmap": true, 00:07:15.254 "write_zeroes": true, 00:07:15.254 "flush": true, 00:07:15.254 "reset": true, 00:07:15.254 "compare": false, 00:07:15.254 "compare_and_write": false, 00:07:15.254 "abort": true, 00:07:15.254 "nvme_admin": false, 00:07:15.254 "nvme_io": false 00:07:15.254 }, 00:07:15.254 "memory_domains": [ 00:07:15.254 { 00:07:15.254 "dma_device_id": "system", 00:07:15.254 "dma_device_type": 1 00:07:15.254 }, 00:07:15.254 { 00:07:15.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.255 "dma_device_type": 2 00:07:15.255 } 00:07:15.255 ], 00:07:15.255 "driver_specific": {} 00:07:15.255 } 00:07:15.255 ]' 00:07:15.255 20:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:15.255 20:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:15.255 20:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:15.255 20:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:15.255 20:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:15.255 20:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:15.255 20:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:15.255 20:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:16.214 20:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:16.214 20:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:16.214 20:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:16.214 20:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:16.214 20:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:18.157 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:18.415 20:16:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:19.351 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:19.351 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:19.351 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:19.351 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.351 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.610 
************************************ 00:07:19.610 START TEST filesystem_ext4 00:07:19.610 ************************************ 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:19.610 mke2fs 1.46.5 (30-Dec-2021) 00:07:19.610 Discarding device blocks: 0/522240 done 00:07:19.610 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:19.610 Filesystem UUID: a025c59a-198e-45f8-b645-64ccc3c60d6d 00:07:19.610 Superblock backups stored on blocks: 00:07:19.610 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:19.610 00:07:19.610 Allocating group tables: 0/64 done 00:07:19.610 Writing inode tables: 0/64 done 00:07:19.610 Creating journal (8192 blocks): done 00:07:19.610 Writing superblocks and filesystem accounting information: 0/64 done 00:07:19.610 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.610 20:16:32 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2906231 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.610 00:07:19.610 real 0m0.176s 00:07:19.610 user 0m0.026s 00:07:19.610 sys 0m0.063s 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:19.610 ************************************ 00:07:19.610 END TEST filesystem_ext4 00:07:19.610 ************************************ 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.610 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.870 ************************************ 00:07:19.870 START TEST filesystem_btrfs 00:07:19.870 ************************************ 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:19.870 btrfs-progs v6.6.2 00:07:19.870 See https://btrfs.readthedocs.io for more information. 00:07:19.870 00:07:19.870 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:19.870 NOTE: several default settings have changed in version 5.15, please make sure 00:07:19.870 this does not affect your deployments: 00:07:19.870 - DUP for metadata (-m dup) 00:07:19.870 - enabled no-holes (-O no-holes) 00:07:19.870 - enabled free-space-tree (-R free-space-tree) 00:07:19.870 00:07:19.870 Label: (null) 00:07:19.870 UUID: de14357a-3e1b-4a40-957a-6ae11e186b44 00:07:19.870 Node size: 16384 00:07:19.870 Sector size: 4096 00:07:19.870 Filesystem size: 510.00MiB 00:07:19.870 Block group profiles: 00:07:19.870 Data: single 8.00MiB 00:07:19.870 Metadata: DUP 32.00MiB 00:07:19.870 System: DUP 8.00MiB 00:07:19.870 SSD detected: yes 00:07:19.870 Zoned device: no 00:07:19.870 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:19.870 Runtime features: free-space-tree 00:07:19.870 Checksum: crc32c 00:07:19.870 Number of devices: 1 00:07:19.870 Devices: 00:07:19.870 ID SIZE PATH 00:07:19.870 1 510.00MiB /dev/nvme0n1p1 00:07:19.870 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2906231 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.870 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.131 00:07:20.131 real 0m0.244s 00:07:20.131 user 0m0.025s 00:07:20.131 sys 0m0.125s 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:20.131 ************************************ 00:07:20.131 END TEST 
filesystem_btrfs 00:07:20.131 ************************************ 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.131 ************************************ 00:07:20.131 START TEST filesystem_xfs 00:07:20.131 ************************************ 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:20.131 20:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:20.131 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:20.131 = sectsz=512 attr=2, projid32bit=1 00:07:20.131 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:20.131 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:20.131 data = bsize=4096 blocks=130560, imaxpct=25 00:07:20.131 = sunit=0 swidth=0 blks 00:07:20.131 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:20.131 log =internal log bsize=4096 blocks=16384, version=2 00:07:20.131 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:20.131 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:20.131 Discarding blocks...Done. 
00:07:20.131 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:20.131 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.131 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.131 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:20.131 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.131 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:20.131 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:20.131 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2906231 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.392 00:07:20.392 real 0m0.205s 00:07:20.392 user 0m0.020s 00:07:20.392 sys 0m0.072s 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:20.392 ************************************ 00:07:20.392 END TEST filesystem_xfs 00:07:20.392 ************************************ 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:20.392 20:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o 
NAME,SERIAL 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2906231 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2906231 ']' 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2906231 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2906231 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2906231' 00:07:21.330 killing process with pid 2906231 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 2906231 00:07:21.330 [2024-05-16 20:16:34.273216] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:21.330 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 2906231 00:07:21.590 [2024-05-16 20:16:34.330930] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:21.849 00:07:21.849 real 0m7.868s 00:07:21.849 user 0m30.669s 00:07:21.849 sys 0m1.024s 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.849 ************************************ 00:07:21.849 END TEST nvmf_filesystem_no_in_capsule 00:07:21.849 ************************************ 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem -- 
target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.849 ************************************ 00:07:21.849 START TEST nvmf_filesystem_in_capsule 00:07:21.849 ************************************ 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2907626 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2907626 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2907626 ']' 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.849 20:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.849 [2024-05-16 20:16:34.793879] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:21.849 [2024-05-16 20:16:34.793916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.849 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.108 [2024-05-16 20:16:34.853769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.108 [2024-05-16 20:16:34.934060] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.108 [2024-05-16 20:16:34.934098] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:22.108 [2024-05-16 20:16:34.934105] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.108 [2024-05-16 20:16:34.934110] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.109 [2024-05-16 20:16:34.934115] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.109 [2024-05-16 20:16:34.934158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.109 [2024-05-16 20:16:34.934244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.109 [2024-05-16 20:16:34.934334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.109 [2024-05-16 20:16:34.934335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.677 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.677 [2024-05-16 20:16:35.668883] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5f19b0/0x5f5ea0) succeed. 00:07:22.935 [2024-05-16 20:16:35.679243] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5f2ff0/0x637530) succeed. 
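The target startup and the nvmf_create_transport call traced above are what give this run its 4096-byte in-capsule data size. A minimal standalone sketch of the same sequence, assuming a stock SPDK tree at SPDK_DIR and the default /var/tmp/spdk.sock RPC socket (the harness's waitforlisten does a more careful readiness check than the loop below):

    # Start the NVMe-oF target on 4 cores, wait for its RPC socket, then
    # create the RDMA transport with a 4096-byte in-capsule data size.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
        sleep 0.5
    done
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192 -c 4096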
00:07:22.935 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.935 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:22.935 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.935 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.935 Malloc1 00:07:22.935 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.935 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:22.935 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.935 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.193 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.193 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.193 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.193 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.193 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.194 [2024-05-16 20:16:35.944218] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:23.194 [2024-05-16 20:16:35.944621] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.194 20:16:35 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:23.194 { 00:07:23.194 "name": "Malloc1", 00:07:23.194 "aliases": [ 00:07:23.194 "4891e268-6789-425d-9ed6-ea07dd8309cc" 00:07:23.194 ], 00:07:23.194 "product_name": "Malloc disk", 00:07:23.194 "block_size": 512, 00:07:23.194 "num_blocks": 1048576, 00:07:23.194 "uuid": "4891e268-6789-425d-9ed6-ea07dd8309cc", 00:07:23.194 "assigned_rate_limits": { 00:07:23.194 "rw_ios_per_sec": 0, 00:07:23.194 "rw_mbytes_per_sec": 0, 00:07:23.194 "r_mbytes_per_sec": 0, 00:07:23.194 "w_mbytes_per_sec": 0 00:07:23.194 }, 00:07:23.194 "claimed": true, 00:07:23.194 "claim_type": "exclusive_write", 00:07:23.194 "zoned": false, 00:07:23.194 "supported_io_types": { 00:07:23.194 "read": true, 00:07:23.194 "write": true, 00:07:23.194 "unmap": true, 00:07:23.194 "write_zeroes": true, 00:07:23.194 "flush": true, 00:07:23.194 "reset": true, 00:07:23.194 "compare": false, 00:07:23.194 "compare_and_write": false, 00:07:23.194 "abort": true, 00:07:23.194 "nvme_admin": false, 00:07:23.194 "nvme_io": false 00:07:23.194 }, 00:07:23.194 "memory_domains": [ 00:07:23.194 { 00:07:23.194 "dma_device_id": "system", 00:07:23.194 "dma_device_type": 1 00:07:23.194 }, 00:07:23.194 { 00:07:23.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.194 "dma_device_type": 2 00:07:23.194 } 00:07:23.194 ], 00:07:23.194 "driver_specific": {} 00:07:23.194 } 00:07:23.194 ]' 00:07:23.194 20:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:23.194 20:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:23.194 20:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:23.194 20:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:23.194 20:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:23.194 20:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:23.194 20:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.194 20:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:24.130 20:16:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.130 20:16:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:24.130 20:16:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.130 20:16:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:24.130 20:16:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:26.033 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:26.033 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:26.033 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:26.291 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:26.548 20:16:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.484 ************************************ 00:07:27.484 START TEST filesystem_in_capsule_ext4 00:07:27.484 ************************************ 00:07:27.484 20:16:40 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:27.484 mke2fs 1.46.5 (30-Dec-2021) 00:07:27.484 Discarding device blocks: 0/522240 done 00:07:27.484 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:27.484 Filesystem UUID: 6c0e52fe-f2a5-4d0e-9d5d-25304c4479e5 00:07:27.484 Superblock backups stored on blocks: 00:07:27.484 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:27.484 00:07:27.484 Allocating group tables: 0/64 done 00:07:27.484 Writing inode tables: 0/64 done 00:07:27.484 Creating journal (8192 blocks): done 00:07:27.484 Writing superblocks and filesystem accounting information: 0/64 done 00:07:27.484 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:27.484 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.744 20:16:40 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2907626 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.744 00:07:27.744 real 0m0.174s 00:07:27.744 user 0m0.027s 00:07:27.744 sys 0m0.061s 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:27.744 ************************************ 00:07:27.744 END TEST filesystem_in_capsule_ext4 00:07:27.744 ************************************ 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.744 ************************************ 00:07:27.744 START TEST filesystem_in_capsule_btrfs 00:07:27.744 ************************************ 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@930 -- # force=-f 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:27.744 btrfs-progs v6.6.2 00:07:27.744 See https://btrfs.readthedocs.io for more information. 00:07:27.744 00:07:27.744 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:27.744 NOTE: several default settings have changed in version 5.15, please make sure 00:07:27.744 this does not affect your deployments: 00:07:27.744 - DUP for metadata (-m dup) 00:07:27.744 - enabled no-holes (-O no-holes) 00:07:27.744 - enabled free-space-tree (-R free-space-tree) 00:07:27.744 00:07:27.744 Label: (null) 00:07:27.744 UUID: c1649441-a397-4fbe-b96a-2f210429f9a4 00:07:27.744 Node size: 16384 00:07:27.744 Sector size: 4096 00:07:27.744 Filesystem size: 510.00MiB 00:07:27.744 Block group profiles: 00:07:27.744 Data: single 8.00MiB 00:07:27.744 Metadata: DUP 32.00MiB 00:07:27.744 System: DUP 8.00MiB 00:07:27.744 SSD detected: yes 00:07:27.744 Zoned device: no 00:07:27.744 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:27.744 Runtime features: free-space-tree 00:07:27.744 Checksum: crc32c 00:07:27.744 Number of devices: 1 00:07:27.744 Devices: 00:07:27.744 ID SIZE PATH 00:07:27.744 1 510.00MiB /dev/nvme0n1p1 00:07:27.744 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:27.744 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2907626 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.003 00:07:28.003 real 0m0.238s 00:07:28.003 user 0m0.020s 00:07:28.003 sys 0m0.124s 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:28.003 ************************************ 00:07:28.003 END TEST filesystem_in_capsule_btrfs 00:07:28.003 ************************************ 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.003 ************************************ 00:07:28.003 START TEST filesystem_in_capsule_xfs 00:07:28.003 ************************************ 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:28.003 20:16:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:28.262 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:28.262 = sectsz=512 attr=2, projid32bit=1 00:07:28.262 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:28.262 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:28.262 data = bsize=4096 blocks=130560, imaxpct=25 00:07:28.262 = sunit=0 swidth=0 blks 00:07:28.262 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:28.262 log =internal log bsize=4096 blocks=16384, version=2 00:07:28.262 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:28.262 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:28.262 Discarding blocks...Done. 
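Each filesystem_in_capsule_* case (ext4, btrfs, xfs) follows the same make/mount/write/unmount cycle traced above; condensed into plain shell it is roughly this sketch, with the device and mount point taken from the log:

    # Generic create-and-exercise cycle used by the per-filesystem subtests.
    fstype=xfs                    # ext4, btrfs or xfs
    dev=/dev/nvme0n1p1            # partition created earlier with parted
    mkfs."$fstype" -f "$dev"      # ext4 forces with -F instead of -f
    mkdir -p /mnt/device
    mount "$dev" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device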
00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2907626 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.262 00:07:28.262 real 0m0.215s 00:07:28.262 user 0m0.024s 00:07:28.262 sys 0m0.067s 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:28.262 ************************************ 00:07:28.262 END TEST filesystem_in_capsule_xfs 00:07:28.262 ************************************ 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:28.262 20:16:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.198 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.199 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:29.199 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:29.199 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.199 20:16:42 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:29.199 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.199 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:29.199 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.199 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.199 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2907626 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2907626 ']' 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2907626 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2907626 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2907626' 00:07:29.457 killing process with pid 2907626 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 2907626 00:07:29.457 [2024-05-16 20:16:42.239157] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:29.457 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 2907626 00:07:29.457 [2024-05-16 20:16:42.322732] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:29.717 00:07:29.717 real 0m7.912s 00:07:29.717 user 0m30.810s 00:07:29.717 sys 0m1.071s 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.717 ************************************ 00:07:29.717 END TEST nvmf_filesystem_in_capsule 00:07:29.717 ************************************ 00:07:29.717 
20:16:42 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.717 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:29.717 rmmod nvme_rdma 00:07:29.977 rmmod nvme_fabrics 00:07:29.977 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.977 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:29.977 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:29.977 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:29.977 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.977 20:16:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:29.977 00:07:29.977 real 0m22.209s 00:07:29.977 user 1m3.377s 00:07:29.977 sys 0m6.771s 00:07:29.977 20:16:42 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.977 20:16:42 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.977 ************************************ 00:07:29.977 END TEST nvmf_filesystem 00:07:29.977 ************************************ 00:07:29.977 20:16:42 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:29.977 20:16:42 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:29.977 20:16:42 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.977 20:16:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:29.977 ************************************ 00:07:29.977 START TEST nvmf_target_discovery 00:07:29.977 ************************************ 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:29.977 * Looking for test storage... 
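The teardown traced above (host disconnect, subsystem deletion, killing the target, unloading the host modules) amounts to the following sketch, reusing the nvmfpid and SPDK_DIR from the startup sketch earlier:

    # Tear down the in-capsule test: host disconnect, target cleanup,
    # then unload the NVMe/RDMA host modules.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"
    modprobe -r nvme-rdma
    modprobe -r nvme-fabrics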
00:07:29.977 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.977 20:16:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.547 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:36.548 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:36.548 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.548 20:16:48 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:36.548 Found net devices under 0000:da:00.0: mlx_0_0 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:36.548 Found net devices under 0000:da:00.1: mlx_0_1 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:36.548 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.548 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:36.548 altname enp218s0f0np0 00:07:36.548 altname ens818f0np0 00:07:36.548 inet 192.168.100.8/24 scope global mlx_0_0 00:07:36.548 valid_lft forever preferred_lft forever 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:36.548 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:36.548 20:16:48 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:36.548 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.548 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:36.548 altname enp218s0f1np1 00:07:36.548 altname ens818f1np1 00:07:36.549 inet 192.168.100.9/24 scope global mlx_0_1 00:07:36.549 valid_lft forever preferred_lft forever 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:36.549 192.168.100.9' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:36.549 192.168.100.9' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:36.549 192.168.100.9' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2912636 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2912636 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 2912636 ']' 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
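The get_ip_address / RDMA_IP_LIST steps traced above reduce to a short shell sequence. A minimal sketch (not the full nvmf/common.sh logic) follows; the interface names mlx_0_0 and mlx_0_1 and the resulting 192.168.100.8 and 192.168.100.9 addresses are the values visible in this trace.

  # Sketch: derive the two RDMA target IPs the way the trace above does
  # (nvmf/common.sh get_ip_address plus the RDMA_IP_LIST handling).
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9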
00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.549 20:16:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.549 [2024-05-16 20:16:49.024272] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:36.549 [2024-05-16 20:16:49.024323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.549 [2024-05-16 20:16:49.085359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.549 [2024-05-16 20:16:49.167396] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.549 [2024-05-16 20:16:49.167438] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.549 [2024-05-16 20:16:49.167446] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.549 [2024-05-16 20:16:49.167452] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.549 [2024-05-16 20:16:49.167456] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.549 [2024-05-16 20:16:49.167495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.549 [2024-05-16 20:16:49.167592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.549 [2024-05-16 20:16:49.167680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.549 [2024-05-16 20:16:49.167681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 [2024-05-16 20:16:49.904779] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c109b0/0x1c14ea0) succeed. 00:07:37.117 [2024-05-16 20:16:49.914946] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c11ff0/0x1c56530) succeed. 
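With the RDMA transport created above (the two create_ib_device notices), the trace below builds four null-bdev subsystems, a discovery listener, and a referral. Roughly the same sequence, issued directly through scripts/rpc.py rather than the test's rpc_cmd wrapper, is sketched here; the NQNs, serial numbers, 192.168.100.8 address and 4420/4430 ports are taken from the trace, while the rpc.py path is an assumption based on the workspace paths seen elsewhere in this log.

  # Sketch: the discovery.sh setup traced above and below, via scripts/rpc.py
  # against the already-running nvmf_tgt.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512                        # 102400 MB null bdev, 512 B blocks
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420   # discovery subsystem listener
  $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430             # appears as discovery log entry 5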
00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 Null1 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 [2024-05-16 20:16:50.072294] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:37.117 [2024-05-16 20:16:50.072710] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 Null2 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.117 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.377 Null3 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.377 Null4 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.377 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:07:37.378 00:07:37.378 Discovery Log Number of Records 6, Generation counter 6 00:07:37.378 =====Discovery Log Entry 0====== 00:07:37.378 trtype: rdma 00:07:37.378 adrfam: ipv4 00:07:37.378 subtype: current discovery subsystem 00:07:37.378 treq: not required 00:07:37.378 portid: 0 00:07:37.378 trsvcid: 4420 00:07:37.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:37.378 traddr: 192.168.100.8 00:07:37.378 eflags: explicit discovery connections, duplicate discovery information 00:07:37.378 rdma_prtype: not specified 00:07:37.378 rdma_qptype: connected 00:07:37.378 rdma_cms: rdma-cm 00:07:37.378 rdma_pkey: 0x0000 00:07:37.378 =====Discovery Log Entry 1====== 00:07:37.378 trtype: rdma 00:07:37.378 adrfam: ipv4 00:07:37.378 subtype: nvme subsystem 00:07:37.378 treq: not required 00:07:37.378 portid: 0 00:07:37.378 trsvcid: 4420 00:07:37.378 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:37.378 traddr: 192.168.100.8 
00:07:37.378 eflags: none 00:07:37.378 rdma_prtype: not specified 00:07:37.378 rdma_qptype: connected 00:07:37.378 rdma_cms: rdma-cm 00:07:37.378 rdma_pkey: 0x0000 00:07:37.378 =====Discovery Log Entry 2====== 00:07:37.378 trtype: rdma 00:07:37.378 adrfam: ipv4 00:07:37.378 subtype: nvme subsystem 00:07:37.378 treq: not required 00:07:37.378 portid: 0 00:07:37.378 trsvcid: 4420 00:07:37.378 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:37.378 traddr: 192.168.100.8 00:07:37.378 eflags: none 00:07:37.378 rdma_prtype: not specified 00:07:37.378 rdma_qptype: connected 00:07:37.378 rdma_cms: rdma-cm 00:07:37.378 rdma_pkey: 0x0000 00:07:37.378 =====Discovery Log Entry 3====== 00:07:37.378 trtype: rdma 00:07:37.378 adrfam: ipv4 00:07:37.378 subtype: nvme subsystem 00:07:37.378 treq: not required 00:07:37.378 portid: 0 00:07:37.378 trsvcid: 4420 00:07:37.378 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:37.378 traddr: 192.168.100.8 00:07:37.378 eflags: none 00:07:37.378 rdma_prtype: not specified 00:07:37.378 rdma_qptype: connected 00:07:37.378 rdma_cms: rdma-cm 00:07:37.378 rdma_pkey: 0x0000 00:07:37.378 =====Discovery Log Entry 4====== 00:07:37.378 trtype: rdma 00:07:37.378 adrfam: ipv4 00:07:37.378 subtype: nvme subsystem 00:07:37.378 treq: not required 00:07:37.378 portid: 0 00:07:37.378 trsvcid: 4420 00:07:37.378 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:37.378 traddr: 192.168.100.8 00:07:37.378 eflags: none 00:07:37.378 rdma_prtype: not specified 00:07:37.378 rdma_qptype: connected 00:07:37.378 rdma_cms: rdma-cm 00:07:37.378 rdma_pkey: 0x0000 00:07:37.378 =====Discovery Log Entry 5====== 00:07:37.378 trtype: rdma 00:07:37.378 adrfam: ipv4 00:07:37.378 subtype: discovery subsystem referral 00:07:37.378 treq: not required 00:07:37.378 portid: 0 00:07:37.378 trsvcid: 4430 00:07:37.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:37.378 traddr: 192.168.100.8 00:07:37.378 eflags: none 00:07:37.378 rdma_prtype: unrecognized 00:07:37.378 rdma_qptype: unrecognized 00:07:37.378 rdma_cms: unrecognized 00:07:37.378 rdma_pkey: 0x0000 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:37.378 Perform nvmf subsystem discovery via RPC 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.378 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.378 [ 00:07:37.378 { 00:07:37.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:37.378 "subtype": "Discovery", 00:07:37.378 "listen_addresses": [ 00:07:37.378 { 00:07:37.378 "trtype": "RDMA", 00:07:37.378 "adrfam": "IPv4", 00:07:37.378 "traddr": "192.168.100.8", 00:07:37.378 "trsvcid": "4420" 00:07:37.378 } 00:07:37.378 ], 00:07:37.378 "allow_any_host": true, 00:07:37.378 "hosts": [] 00:07:37.378 }, 00:07:37.378 { 00:07:37.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:37.378 "subtype": "NVMe", 00:07:37.378 "listen_addresses": [ 00:07:37.378 { 00:07:37.378 "trtype": "RDMA", 00:07:37.378 "adrfam": "IPv4", 00:07:37.378 "traddr": "192.168.100.8", 00:07:37.378 "trsvcid": "4420" 00:07:37.378 } 00:07:37.378 ], 00:07:37.378 "allow_any_host": true, 00:07:37.378 "hosts": [], 00:07:37.378 "serial_number": "SPDK00000000000001", 00:07:37.378 "model_number": "SPDK bdev Controller", 00:07:37.378 "max_namespaces": 32, 00:07:37.378 "min_cntlid": 1, 00:07:37.378 "max_cntlid": 65519, 
00:07:37.378 "namespaces": [ 00:07:37.378 { 00:07:37.378 "nsid": 1, 00:07:37.378 "bdev_name": "Null1", 00:07:37.378 "name": "Null1", 00:07:37.378 "nguid": "8E3670AAB3AA4133A222E86382FD283C", 00:07:37.378 "uuid": "8e3670aa-b3aa-4133-a222-e86382fd283c" 00:07:37.378 } 00:07:37.378 ] 00:07:37.378 }, 00:07:37.378 { 00:07:37.378 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:37.378 "subtype": "NVMe", 00:07:37.378 "listen_addresses": [ 00:07:37.378 { 00:07:37.378 "trtype": "RDMA", 00:07:37.378 "adrfam": "IPv4", 00:07:37.378 "traddr": "192.168.100.8", 00:07:37.378 "trsvcid": "4420" 00:07:37.378 } 00:07:37.378 ], 00:07:37.378 "allow_any_host": true, 00:07:37.378 "hosts": [], 00:07:37.378 "serial_number": "SPDK00000000000002", 00:07:37.378 "model_number": "SPDK bdev Controller", 00:07:37.378 "max_namespaces": 32, 00:07:37.378 "min_cntlid": 1, 00:07:37.378 "max_cntlid": 65519, 00:07:37.378 "namespaces": [ 00:07:37.378 { 00:07:37.378 "nsid": 1, 00:07:37.378 "bdev_name": "Null2", 00:07:37.378 "name": "Null2", 00:07:37.378 "nguid": "8456E0F1D6534D45BCA53B829ECC0C54", 00:07:37.378 "uuid": "8456e0f1-d653-4d45-bca5-3b829ecc0c54" 00:07:37.378 } 00:07:37.378 ] 00:07:37.378 }, 00:07:37.378 { 00:07:37.378 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:37.378 "subtype": "NVMe", 00:07:37.378 "listen_addresses": [ 00:07:37.378 { 00:07:37.378 "trtype": "RDMA", 00:07:37.378 "adrfam": "IPv4", 00:07:37.378 "traddr": "192.168.100.8", 00:07:37.378 "trsvcid": "4420" 00:07:37.378 } 00:07:37.378 ], 00:07:37.378 "allow_any_host": true, 00:07:37.378 "hosts": [], 00:07:37.378 "serial_number": "SPDK00000000000003", 00:07:37.378 "model_number": "SPDK bdev Controller", 00:07:37.378 "max_namespaces": 32, 00:07:37.378 "min_cntlid": 1, 00:07:37.378 "max_cntlid": 65519, 00:07:37.378 "namespaces": [ 00:07:37.378 { 00:07:37.378 "nsid": 1, 00:07:37.378 "bdev_name": "Null3", 00:07:37.378 "name": "Null3", 00:07:37.378 "nguid": "26539B8C5B4F4C11AB219FA548945C39", 00:07:37.378 "uuid": "26539b8c-5b4f-4c11-ab21-9fa548945c39" 00:07:37.378 } 00:07:37.378 ] 00:07:37.378 }, 00:07:37.378 { 00:07:37.378 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:37.378 "subtype": "NVMe", 00:07:37.378 "listen_addresses": [ 00:07:37.378 { 00:07:37.378 "trtype": "RDMA", 00:07:37.378 "adrfam": "IPv4", 00:07:37.378 "traddr": "192.168.100.8", 00:07:37.378 "trsvcid": "4420" 00:07:37.378 } 00:07:37.378 ], 00:07:37.378 "allow_any_host": true, 00:07:37.378 "hosts": [], 00:07:37.378 "serial_number": "SPDK00000000000004", 00:07:37.378 "model_number": "SPDK bdev Controller", 00:07:37.378 "max_namespaces": 32, 00:07:37.378 "min_cntlid": 1, 00:07:37.378 "max_cntlid": 65519, 00:07:37.378 "namespaces": [ 00:07:37.378 { 00:07:37.379 "nsid": 1, 00:07:37.379 "bdev_name": "Null4", 00:07:37.379 "name": "Null4", 00:07:37.379 "nguid": "2DC7D89D173847ADB3453844C228BD70", 00:07:37.379 "uuid": "2dc7d89d-1738-47ad-b345-3844c228bd70" 00:07:37.379 } 00:07:37.379 ] 00:07:37.379 } 00:07:37.379 ] 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.379 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.638 20:16:50 
nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:37.638 rmmod nvme_rdma 00:07:37.638 rmmod nvme_fabrics 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:37.638 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2912636 ']' 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2912636 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 2912636 ']' 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 2912636 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2912636 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2912636' 00:07:37.639 killing process with pid 
2912636 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 2912636 00:07:37.639 [2024-05-16 20:16:50.522794] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:37.639 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 2912636 00:07:37.639 [2024-05-16 20:16:50.603771] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:37.898 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.898 20:16:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:37.898 00:07:37.898 real 0m7.971s 00:07:37.898 user 0m8.194s 00:07:37.898 sys 0m5.000s 00:07:37.898 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.898 20:16:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.898 ************************************ 00:07:37.898 END TEST nvmf_target_discovery 00:07:37.898 ************************************ 00:07:37.898 20:16:50 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:37.898 20:16:50 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:37.898 20:16:50 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.898 20:16:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:37.898 ************************************ 00:07:37.898 START TEST nvmf_referrals 00:07:37.898 ************************************ 00:07:37.898 20:16:50 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:38.157 * Looking for test storage... 
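The teardown traced just above, before END TEST nvmf_target_discovery, mirrors that setup: delete each subsystem and its null bdev, drop the referral, confirm no bdevs remain, then unload the NVMe fabrics modules. A rough sketch under the same rpc.py path assumption as the setup sketch:

  # Sketch: teardown mirror of the setup, as traced before END TEST nvmf_target_discovery.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path
  for i in 1 2 3 4; do
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $rpc bdev_null_delete Null$i
  done
  $rpc nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
  $rpc bdev_get_bdevs | jq -r '.[].name'   # empty output once every Null bdev is gone
  modprobe -v -r nvme-rdma                 # nvmfcleanup steps from the trace
  modprobe -v -r nvme-fabrics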
00:07:38.157 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:38.157 20:16:50 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:38.157 20:16:50 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.724 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:44.725 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:44.725 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:44.725 Found net devices under 0000:da:00.0: mlx_0_0 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:44.725 Found net devices under 0000:da:00.1: mlx_0_1 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:44.725 20:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:44.725 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:44.725 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:44.725 altname enp218s0f0np0 00:07:44.725 altname ens818f0np0 00:07:44.725 inet 192.168.100.8/24 scope global mlx_0_0 00:07:44.725 valid_lft forever preferred_lft forever 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:44.725 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:44.725 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:44.725 altname enp218s0f1np1 00:07:44.725 altname ens818f1np1 00:07:44.725 inet 192.168.100.9/24 scope global mlx_0_1 00:07:44.725 valid_lft forever preferred_lft forever 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:44.725 20:16:57 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:44.725 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:44.726 192.168.100.9' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:44.726 192.168.100.9' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:44.726 20:16:57 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:44.726 192.168.100.9' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2916457 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2916457 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 2916457 ']' 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.726 20:16:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.726 [2024-05-16 20:16:57.202590] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:44.726 [2024-05-16 20:16:57.202632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.726 [2024-05-16 20:16:57.261275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.726 [2024-05-16 20:16:57.341302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.726 [2024-05-16 20:16:57.341338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.726 [2024-05-16 20:16:57.341345] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.726 [2024-05-16 20:16:57.341351] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
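The head/tail pipeline traced at nvmf/common.sh@457-458 above is how the harness picks the two target addresses out of the newline-separated RDMA_IP_LIST. A tiny sketch of that selection, seeded with the addresses reported in this run:

# Select the first and second RDMA-capable addresses the same way common.sh does.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"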
00:07:44.726 [2024-05-16 20:16:57.341356] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.726 [2024-05-16 20:16:57.341405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.726 [2024-05-16 20:16:57.341504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.726 [2024-05-16 20:16:57.341527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.726 [2024-05-16 20:16:57.341529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.294 [2024-05-16 20:16:58.073630] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d209b0/0x1d24ea0) succeed. 00:07:45.294 [2024-05-16 20:16:58.083857] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d21ff0/0x1d66530) succeed. 
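At this point nvmf_tgt is up (pid 2916457) and both mlx5 ports are registered as IB devices, so the test creates the RDMA transport and, in the lines that follow, exposes the discovery subsystem on port 8009. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so the equivalent direct calls look roughly like this, assuming the target is listening on the default /var/tmp/spdk.sock socket:

# Create the RDMA transport with the same options the test passes through rpc_cmd,
# then add a discovery listener on the first RDMA-capable address.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery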
00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.294 [2024-05-16 20:16:58.206928] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:45.294 [2024-05-16 20:16:58.207306] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.294 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.552 20:16:58 
nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:45.552 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:45.809 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@74 -- # get_referral_ips nvme 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:46.066 20:16:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.323 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:46.583 rmmod nvme_rdma 00:07:46.583 rmmod nvme_fabrics 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2916457 ']' 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2916457 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 2916457 ']' 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 2916457 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2916457 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2916457' 00:07:46.583 killing process with pid 2916457 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 2916457 
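The body of the referrals test above is a round trip: add referrals to 127.0.0.2/3/4 over RPC, read them back both with nvmf_discovery_get_referrals and from the discovery log page via nvme discover, then remove them and confirm the list is empty again. A condensed sketch of one such round trip, using the same addresses, port, and jq filter as the trace (the --hostnqn/--hostid arguments shown in the trace are omitted here for brevity):

# Add a referral, confirm it is visible over RPC and in the discovery log page,
# then remove it again -- the same cycle referrals.sh exercises above.
RPC=./scripts/rpc.py
$RPC nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
$RPC nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430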
00:07:46.583 [2024-05-16 20:16:59.500321] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:46.583 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 2916457 00:07:46.842 [2024-05-16 20:16:59.578009] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:46.842 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.842 20:16:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:46.842 00:07:46.842 real 0m8.903s 00:07:46.842 user 0m12.021s 00:07:46.842 sys 0m5.412s 00:07:46.842 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.842 20:16:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.842 ************************************ 00:07:46.842 END TEST nvmf_referrals 00:07:46.842 ************************************ 00:07:46.842 20:16:59 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:46.842 20:16:59 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:46.842 20:16:59 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.842 20:16:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:46.842 ************************************ 00:07:46.842 START TEST nvmf_connect_disconnect 00:07:46.842 ************************************ 00:07:46.842 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:47.100 * Looking for test storage... 
00:07:47.100 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 
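A few lines above, sourcing nvmf/common.sh generated the host identity used by every nvme discover/connect call in this job: nvme gen-hostnqn returns an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the host ID reported in the trace is simply its UUID suffix. A sketch that builds the same pair; the parameter expansion is one plausible way to derive the ID, and common.sh may do it differently:

# Build the --hostnqn/--hostid pair the way the trace shows (nvmf/common.sh@17-19).
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:803833e2-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip everything up to the last ':' -> the UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009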
00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.100 20:16:59 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.366 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.366 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.366 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.366 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.366 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.366 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.367 20:17:05 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:52.367 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:52.367 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.367 20:17:05 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:52.367 Found net devices under 0000:da:00.0: mlx_0_0 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:52.367 Found net devices under 0000:da:00.1: mlx_0_1 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:52.367 20:17:05 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:52.367 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.367 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:52.367 altname enp218s0f0np0 00:07:52.367 altname ens818f0np0 00:07:52.367 inet 192.168.100.8/24 scope global mlx_0_0 00:07:52.367 valid_lft forever preferred_lft forever 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 
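The discovery sequence traced here is the same one the referrals test ran: each Mellanox PCI function is mapped to its net device through sysfs, and the interface's IPv4 address is read back with the ip/awk/cut pipeline. A self-contained sketch of that lookup for one port, using the PCI address reported in this run:

# Map a Mellanox PCI function to its net device(s) and print their IPv4 addresses,
# mirroring nvmf/common.sh@383/@399 and get_ip_address (@112-@113) above.
pci=0000:da:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
for dev in "${pci_net_devs[@]}"; do
  addr=$(ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1)
  echo "$dev -> ${addr:-<no IPv4 address>}"
done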
00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:52.367 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:52.367 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.367 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:52.367 altname enp218s0f1np1 00:07:52.367 altname ens818f1np1 00:07:52.368 inet 192.168.100.9/24 scope global mlx_0_1 00:07:52.368 valid_lft forever preferred_lft forever 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:52.368 192.168.100.9' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:52.368 192.168.100.9' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:52.368 192.168.100.9' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2920316 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2920316 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 2920316 ']' 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.368 20:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.368 [2024-05-16 20:17:05.263683] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:52.368 [2024-05-16 20:17:05.263725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.368 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.368 [2024-05-16 20:17:05.319482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.627 [2024-05-16 20:17:05.401293] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.627 [2024-05-16 20:17:05.401327] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.627 [2024-05-16 20:17:05.401334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.627 [2024-05-16 20:17:05.401341] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.627 [2024-05-16 20:17:05.401346] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.627 [2024-05-16 20:17:05.401387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.627 [2024-05-16 20:17:05.401438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.627 [2024-05-16 20:17:05.401440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.627 [2024-05-16 20:17:05.401413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.193 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.193 [2024-05-16 20:17:06.117378] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:53.193 [2024-05-16 20:17:06.139139] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7759b0/0x779ea0) succeed. 00:07:53.193 [2024-05-16 20:17:06.149521] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x776ff0/0x7bb530) succeed. 
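At this point in the trace the connect/disconnect test has brought up the NVMe-oF target and created its RDMA transport; the subsystem setup and the five connect/disconnect iterations that follow all reuse that target. A minimal sketch of the equivalent manual bring-up, assuming a built SPDK tree at $SPDK_DIR and calling scripts/rpc.py directly instead of the rpc_cmd wrapper used by the test scripts (the path variable and the backgrounding are illustrative, not taken from the log):

  # start the target app on cores 0-3 with all trace groups enabled, as logged above
  $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # create the RDMA transport with the same options the trace passes to rpc_cmd
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

The next steps in the trace (Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, RDMA listener on 192.168.100.8:4420) are issued the same way through rpc_cmd.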
00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.452 [2024-05-16 20:17:06.289344] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:53.452 [2024-05-16 20:17:06.289722] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:53.452 20:17:06 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:57.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.547 20:17:25 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:13.547 20:17:25 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:13.547 20:17:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.547 20:17:25 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:13.547 20:17:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:13.547 20:17:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:13.547 20:17:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:13.547 20:17:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.547 20:17:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:13.547 rmmod nvme_rdma 00:08:13.547 rmmod nvme_fabrics 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2920316 ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2920316 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2920316 ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 2920316 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2920316 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2920316' 00:08:13.547 killing process with pid 2920316 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 2920316 00:08:13.547 [2024-05-16 20:17:26.060773] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 2920316 00:08:13.547 [2024-05-16 20:17:26.110975] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:13.547 00:08:13.547 real 0m26.496s 00:08:13.547 user 1m24.400s 00:08:13.547 sys 0m4.926s 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.547 20:17:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.547 ************************************ 00:08:13.547 END TEST nvmf_connect_disconnect 00:08:13.547 ************************************ 00:08:13.547 20:17:26 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:13.547 20:17:26 nvmf_rdma 
-- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:13.547 20:17:26 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.547 20:17:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:13.547 ************************************ 00:08:13.547 START TEST nvmf_multitarget 00:08:13.547 ************************************ 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:13.547 * Looking for test storage... 00:08:13.547 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:13.547 
20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.547 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.548 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.548 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.548 20:17:26 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.548 20:17:26 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.548 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.548 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.548 20:17:26 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.548 20:17:26 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.116 
20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:20.116 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:20.116 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.116 20:17:32 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:20.116 Found net devices under 0000:da:00.0: mlx_0_0 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:20.116 Found net devices under 0000:da:00.1: mlx_0_1 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:20.116 20:17:32 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:20.116 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:20.116 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:20.116 altname enp218s0f0np0 00:08:20.116 altname ens818f0np0 00:08:20.116 inet 192.168.100.8/24 scope global mlx_0_0 00:08:20.116 valid_lft forever preferred_lft forever 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:20.116 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:20.116 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:20.117 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:20.117 altname enp218s0f1np1 00:08:20.117 altname ens818f1np1 00:08:20.117 inet 192.168.100.9/24 scope global mlx_0_1 00:08:20.117 valid_lft forever preferred_lft forever 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@422 -- # return 0 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 
-- # RDMA_IP_LIST='192.168.100.8 00:08:20.117 192.168.100.9' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:20.117 192.168.100.9' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:20.117 192.168.100.9' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2927454 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2927454 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 2927454 ']' 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:20.117 20:17:32 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.117 [2024-05-16 20:17:32.592551] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:08:20.117 [2024-05-16 20:17:32.592607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.117 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.117 [2024-05-16 20:17:32.656076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.117 [2024-05-16 20:17:32.730516] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
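The multitarget run has just repeated the same address discovery as the previous test: it reads the IPv4 address of each RDMA netdev and splits the resulting list into a first and second target IP. A small sketch paraphrasing the get_ip_address / head / tail steps visible in the trace, assuming mlx_0_0 and mlx_0_1 are the two RDMA-capable interfaces it found:

  # take field 4 of `ip -o -4 addr show <if>` ("192.168.100.8/24") and drop the prefix length
  get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9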
00:08:20.117 [2024-05-16 20:17:32.730555] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.117 [2024-05-16 20:17:32.730561] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.117 [2024-05-16 20:17:32.730567] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.117 [2024-05-16 20:17:32.730572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.117 [2024-05-16 20:17:32.730627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.117 [2024-05-16 20:17:32.730716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.117 [2024-05-16 20:17:32.730785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.117 [2024-05-16 20:17:32.730786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:20.685 "nvmf_tgt_1" 00:08:20.685 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:20.943 "nvmf_tgt_2" 00:08:20.944 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:20.944 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:20.944 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:20.944 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:20.944 true 00:08:21.202 20:17:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:21.202 true 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.202 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:21.202 rmmod nvme_rdma 00:08:21.202 rmmod nvme_fabrics 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2927454 ']' 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2927454 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 2927454 ']' 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 2927454 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2927454 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2927454' 00:08:21.461 killing process with pid 2927454 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 2927454 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 2927454 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:21.461 00:08:21.461 real 0m8.044s 00:08:21.461 user 0m9.188s 00:08:21.461 sys 0m5.030s 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.461 20:17:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:21.461 ************************************ 00:08:21.461 END TEST nvmf_multitarget 00:08:21.461 ************************************ 00:08:21.721 20:17:34 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh 
--transport=rdma 00:08:21.721 20:17:34 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:21.721 20:17:34 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.721 20:17:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:21.721 ************************************ 00:08:21.721 START TEST nvmf_rpc 00:08:21.721 ************************************ 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:21.721 * Looking for test storage... 00:08:21.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.721 
20:17:34 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.721 20:17:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- 
# for pci in "${pci_devs[@]}" 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:28.292 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:28.292 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:28.292 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:28.293 Found net devices under 0000:da:00.0: mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:28.293 Found net devices under 0000:da:00.1: mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
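nvmftestinit for the rpc test has just rescanned the PCI bus, matched the two Mellanox 0x1015 functions, and concluded is_hw=yes. The mapping from a PCI function to its kernel netdev is done purely through sysfs; a rough sketch of that step, assuming the two ports at 0000:da:00.0 and 0000:da:00.1 reported in the trace:

  net_devs=()
  for pci in 0000:da:00.0 0000:da:00.1; do
      pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)   # e.g. /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0
      net_devs+=("${pci_net_devs[@]##*/}")               # strip the path, keep only the interface name
  done
  echo "${net_devs[@]}"                                  # mlx_0_0 mlx_0_1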
00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ 
-f1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:28.293 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:28.293 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:28.293 altname enp218s0f0np0 00:08:28.293 altname ens818f0np0 00:08:28.293 inet 192.168.100.8/24 scope global mlx_0_0 00:08:28.293 valid_lft forever preferred_lft forever 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:28.293 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:28.293 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:28.293 altname enp218s0f1np1 00:08:28.293 altname ens818f1np1 00:08:28.293 inet 192.168.100.9/24 scope global mlx_0_1 00:08:28.293 valid_lft forever preferred_lft forever 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:28.293 20:17:40 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:28.293 192.168.100.9' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:28.293 192.168.100.9' 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:08:28.293 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:28.293 192.168.100.9' 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2931140 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2931140 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 2931140 ']' 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:28.294 20:17:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.294 [2024-05-16 20:17:40.523902] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:08:28.294 [2024-05-16 20:17:40.523950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.294 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.294 [2024-05-16 20:17:40.586470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.294 [2024-05-16 20:17:40.660617] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.294 [2024-05-16 20:17:40.660659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.294 [2024-05-16 20:17:40.660666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.294 [2024-05-16 20:17:40.660672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.294 [2024-05-16 20:17:40.660676] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
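The nvmfappstart/waitforlisten pair traced here boils down to launching the target binary with the flags shown (-i 0 -e 0xFFFF -m 0xF) and polling its UNIX-domain RPC socket until it answers. A hedged sketch only: the repository-relative paths and the 60-second timeout are illustrative, and the real waitforlisten helper retries rpc.py in a loop rather than relying on one call:

    # Start the NVMe-oF target and wait until /var/tmp/spdk.sock accepts RPCs.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    ./scripts/rpc.py -t 60 -s /var/tmp/spdk.sock rpc_get_methods > /dev/null
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"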
00:08:28.294 [2024-05-16 20:17:40.660722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.294 [2024-05-16 20:17:40.660820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.294 [2024-05-16 20:17:40.660911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.294 [2024-05-16 20:17:40.660912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:28.553 "tick_rate": 2100000000, 00:08:28.553 "poll_groups": [ 00:08:28.553 { 00:08:28.553 "name": "nvmf_tgt_poll_group_000", 00:08:28.553 "admin_qpairs": 0, 00:08:28.553 "io_qpairs": 0, 00:08:28.553 "current_admin_qpairs": 0, 00:08:28.553 "current_io_qpairs": 0, 00:08:28.553 "pending_bdev_io": 0, 00:08:28.553 "completed_nvme_io": 0, 00:08:28.553 "transports": [] 00:08:28.553 }, 00:08:28.553 { 00:08:28.553 "name": "nvmf_tgt_poll_group_001", 00:08:28.553 "admin_qpairs": 0, 00:08:28.553 "io_qpairs": 0, 00:08:28.553 "current_admin_qpairs": 0, 00:08:28.553 "current_io_qpairs": 0, 00:08:28.553 "pending_bdev_io": 0, 00:08:28.553 "completed_nvme_io": 0, 00:08:28.553 "transports": [] 00:08:28.553 }, 00:08:28.553 { 00:08:28.553 "name": "nvmf_tgt_poll_group_002", 00:08:28.553 "admin_qpairs": 0, 00:08:28.553 "io_qpairs": 0, 00:08:28.553 "current_admin_qpairs": 0, 00:08:28.553 "current_io_qpairs": 0, 00:08:28.553 "pending_bdev_io": 0, 00:08:28.553 "completed_nvme_io": 0, 00:08:28.553 "transports": [] 00:08:28.553 }, 00:08:28.553 { 00:08:28.553 "name": "nvmf_tgt_poll_group_003", 00:08:28.553 "admin_qpairs": 0, 00:08:28.553 "io_qpairs": 0, 00:08:28.553 "current_admin_qpairs": 0, 00:08:28.553 "current_io_qpairs": 0, 00:08:28.553 "pending_bdev_io": 0, 00:08:28.553 "completed_nvme_io": 0, 00:08:28.553 "transports": [] 00:08:28.553 } 00:08:28.553 ] 00:08:28.553 }' 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.553 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.553 [2024-05-16 20:17:41.501547] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22269c0/0x222aeb0) succeed. 00:08:28.553 [2024-05-16 20:17:41.511950] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2228000/0x226c540) succeed. 00:08:28.812 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.812 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:28.812 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.812 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.812 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.812 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:28.812 "tick_rate": 2100000000, 00:08:28.812 "poll_groups": [ 00:08:28.812 { 00:08:28.812 "name": "nvmf_tgt_poll_group_000", 00:08:28.812 "admin_qpairs": 0, 00:08:28.812 "io_qpairs": 0, 00:08:28.812 "current_admin_qpairs": 0, 00:08:28.812 "current_io_qpairs": 0, 00:08:28.813 "pending_bdev_io": 0, 00:08:28.813 "completed_nvme_io": 0, 00:08:28.813 "transports": [ 00:08:28.813 { 00:08:28.813 "trtype": "RDMA", 00:08:28.813 "pending_data_buffer": 0, 00:08:28.813 "devices": [ 00:08:28.813 { 00:08:28.813 "name": "mlx5_0", 00:08:28.813 "polls": 14678, 00:08:28.813 "idle_polls": 14678, 00:08:28.813 "completions": 0, 00:08:28.813 "requests": 0, 00:08:28.813 "request_latency": 0, 00:08:28.813 "pending_free_request": 0, 00:08:28.813 "pending_rdma_read": 0, 00:08:28.813 "pending_rdma_write": 0, 00:08:28.813 "pending_rdma_send": 0, 00:08:28.813 "total_send_wrs": 0, 00:08:28.813 "send_doorbell_updates": 0, 00:08:28.813 "total_recv_wrs": 4096, 00:08:28.813 "recv_doorbell_updates": 1 00:08:28.813 }, 00:08:28.813 { 00:08:28.813 "name": "mlx5_1", 00:08:28.813 "polls": 14678, 00:08:28.813 "idle_polls": 14678, 00:08:28.813 "completions": 0, 00:08:28.813 "requests": 0, 00:08:28.813 "request_latency": 0, 00:08:28.813 "pending_free_request": 0, 00:08:28.813 "pending_rdma_read": 0, 00:08:28.813 "pending_rdma_write": 0, 00:08:28.813 "pending_rdma_send": 0, 00:08:28.813 "total_send_wrs": 0, 00:08:28.813 "send_doorbell_updates": 0, 00:08:28.813 "total_recv_wrs": 4096, 00:08:28.813 "recv_doorbell_updates": 1 00:08:28.813 } 00:08:28.813 ] 00:08:28.813 } 00:08:28.813 ] 00:08:28.813 }, 00:08:28.813 { 00:08:28.813 "name": "nvmf_tgt_poll_group_001", 00:08:28.813 "admin_qpairs": 0, 00:08:28.813 "io_qpairs": 0, 00:08:28.813 "current_admin_qpairs": 0, 00:08:28.813 "current_io_qpairs": 0, 00:08:28.813 "pending_bdev_io": 0, 00:08:28.813 "completed_nvme_io": 0, 00:08:28.813 "transports": [ 00:08:28.813 { 00:08:28.813 "trtype": "RDMA", 00:08:28.813 "pending_data_buffer": 0, 00:08:28.813 "devices": [ 00:08:28.813 { 00:08:28.813 "name": "mlx5_0", 00:08:28.813 "polls": 9736, 00:08:28.813 "idle_polls": 9736, 00:08:28.813 "completions": 0, 00:08:28.813 "requests": 0, 00:08:28.813 "request_latency": 0, 00:08:28.813 "pending_free_request": 0, 00:08:28.813 "pending_rdma_read": 0, 00:08:28.813 "pending_rdma_write": 0, 00:08:28.813 "pending_rdma_send": 0, 00:08:28.813 "total_send_wrs": 0, 00:08:28.813 "send_doorbell_updates": 0, 00:08:28.813 "total_recv_wrs": 4096, 00:08:28.813 "recv_doorbell_updates": 1 00:08:28.813 }, 00:08:28.813 
{ 00:08:28.813 "name": "mlx5_1", 00:08:28.813 "polls": 9736, 00:08:28.813 "idle_polls": 9736, 00:08:28.813 "completions": 0, 00:08:28.813 "requests": 0, 00:08:28.813 "request_latency": 0, 00:08:28.813 "pending_free_request": 0, 00:08:28.813 "pending_rdma_read": 0, 00:08:28.813 "pending_rdma_write": 0, 00:08:28.813 "pending_rdma_send": 0, 00:08:28.813 "total_send_wrs": 0, 00:08:28.813 "send_doorbell_updates": 0, 00:08:28.813 "total_recv_wrs": 4096, 00:08:28.813 "recv_doorbell_updates": 1 00:08:28.813 } 00:08:28.813 ] 00:08:28.813 } 00:08:28.813 ] 00:08:28.813 }, 00:08:28.813 { 00:08:28.813 "name": "nvmf_tgt_poll_group_002", 00:08:28.813 "admin_qpairs": 0, 00:08:28.813 "io_qpairs": 0, 00:08:28.813 "current_admin_qpairs": 0, 00:08:28.813 "current_io_qpairs": 0, 00:08:28.813 "pending_bdev_io": 0, 00:08:28.813 "completed_nvme_io": 0, 00:08:28.813 "transports": [ 00:08:28.813 { 00:08:28.813 "trtype": "RDMA", 00:08:28.813 "pending_data_buffer": 0, 00:08:28.813 "devices": [ 00:08:28.813 { 00:08:28.813 "name": "mlx5_0", 00:08:28.813 "polls": 5282, 00:08:28.813 "idle_polls": 5282, 00:08:28.813 "completions": 0, 00:08:28.813 "requests": 0, 00:08:28.813 "request_latency": 0, 00:08:28.813 "pending_free_request": 0, 00:08:28.813 "pending_rdma_read": 0, 00:08:28.813 "pending_rdma_write": 0, 00:08:28.813 "pending_rdma_send": 0, 00:08:28.813 "total_send_wrs": 0, 00:08:28.813 "send_doorbell_updates": 0, 00:08:28.813 "total_recv_wrs": 4096, 00:08:28.813 "recv_doorbell_updates": 1 00:08:28.813 }, 00:08:28.813 { 00:08:28.813 "name": "mlx5_1", 00:08:28.813 "polls": 5282, 00:08:28.813 "idle_polls": 5282, 00:08:28.813 "completions": 0, 00:08:28.813 "requests": 0, 00:08:28.813 "request_latency": 0, 00:08:28.813 "pending_free_request": 0, 00:08:28.813 "pending_rdma_read": 0, 00:08:28.813 "pending_rdma_write": 0, 00:08:28.813 "pending_rdma_send": 0, 00:08:28.813 "total_send_wrs": 0, 00:08:28.813 "send_doorbell_updates": 0, 00:08:28.813 "total_recv_wrs": 4096, 00:08:28.813 "recv_doorbell_updates": 1 00:08:28.813 } 00:08:28.813 ] 00:08:28.813 } 00:08:28.813 ] 00:08:28.813 }, 00:08:28.813 { 00:08:28.813 "name": "nvmf_tgt_poll_group_003", 00:08:28.813 "admin_qpairs": 0, 00:08:28.813 "io_qpairs": 0, 00:08:28.813 "current_admin_qpairs": 0, 00:08:28.813 "current_io_qpairs": 0, 00:08:28.813 "pending_bdev_io": 0, 00:08:28.813 "completed_nvme_io": 0, 00:08:28.813 "transports": [ 00:08:28.813 { 00:08:28.813 "trtype": "RDMA", 00:08:28.813 "pending_data_buffer": 0, 00:08:28.813 "devices": [ 00:08:28.813 { 00:08:28.813 "name": "mlx5_0", 00:08:28.813 "polls": 845, 00:08:28.813 "idle_polls": 845, 00:08:28.813 "completions": 0, 00:08:28.813 "requests": 0, 00:08:28.813 "request_latency": 0, 00:08:28.813 "pending_free_request": 0, 00:08:28.813 "pending_rdma_read": 0, 00:08:28.813 "pending_rdma_write": 0, 00:08:28.813 "pending_rdma_send": 0, 00:08:28.813 "total_send_wrs": 0, 00:08:28.813 "send_doorbell_updates": 0, 00:08:28.813 "total_recv_wrs": 4096, 00:08:28.813 "recv_doorbell_updates": 1 00:08:28.813 }, 00:08:28.813 { 00:08:28.813 "name": "mlx5_1", 00:08:28.813 "polls": 845, 00:08:28.813 "idle_polls": 845, 00:08:28.813 "completions": 0, 00:08:28.813 "requests": 0, 00:08:28.813 "request_latency": 0, 00:08:28.813 "pending_free_request": 0, 00:08:28.813 "pending_rdma_read": 0, 00:08:28.813 "pending_rdma_write": 0, 00:08:28.813 "pending_rdma_send": 0, 00:08:28.813 "total_send_wrs": 0, 00:08:28.813 "send_doorbell_updates": 0, 00:08:28.813 "total_recv_wrs": 4096, 00:08:28.813 "recv_doorbell_updates": 1 00:08:28.813 } 00:08:28.813 ] 
00:08:28.813 } 00:08:28.813 ] 00:08:28.813 } 00:08:28.813 ] 00:08:28.813 }' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:08:28.813 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.073 Malloc1 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.073 20:17:41 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.073 [2024-05-16 20:17:41.925739] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:29.073 [2024-05-16 20:17:41.926162] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:29.073 20:17:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:29.073 [2024-05-16 20:17:41.972158] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 
'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:08:29.073 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:29.073 could not add new controller: failed to write to nvme-fabrics device 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.073 20:17:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:30.007 20:17:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:30.007 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:30.007 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:30.007 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:30.007 20:17:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:32.538 20:17:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:32.538 20:17:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:32.538 20:17:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:32.538 20:17:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:32.538 20:17:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:32.538 20:17:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:32.538 20:17:44 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:33.105 20:17:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:33.105 [2024-05-16 20:17:46.023845] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:08:33.105 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:33.105 could not add new controller: failed to write to nvme-fabrics device 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.105 20:17:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:34.040 20:17:47 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.040 20:17:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:34.040 20:17:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:34.040 20:17:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:34.040 20:17:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:36.574 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:36.574 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:36.574 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:36.574 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:36.574 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:36.574 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:36.574 20:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.141 20:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.141 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:37.141 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:37.141 20:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.141 [2024-05-16 20:17:50.044226] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.141 20:17:50 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.141 20:17:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:38.077 20:17:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:38.077 20:17:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:38.077 20:17:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:38.077 20:17:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:38.077 20:17:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:40.614 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:40.614 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:40.614 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.614 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:40.614 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.614 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:40.614 20:17:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.246 20:17:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.246 [2024-05-16 20:17:54.029596] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:41.246 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.247 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 20:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.247 20:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:42.181 20:17:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.181 20:17:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:42.181 20:17:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.181 20:17:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:42.181 20:17:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:44.084 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:44.084 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:44.084 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.084 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:44.084 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.084 20:17:57 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # return 0 00:08:44.084 20:17:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.018 20:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.018 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.018 20:17:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.018 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.018 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.276 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.277 [2024-05-16 20:17:58.026499] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
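Each of these loop iterations drives the same RPC sequence; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so one pass can be reproduced by hand roughly as follows (NQN, serial, listen address and host UUID are the values this run uses):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc_py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid=803833e2-2ada-e911-906e-0017a4403562
    # waitforserial: poll until the namespace with serial SPDKISFASTANDAWESOME appears
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The earlier "does not allow host" failures are the expected negative checks: without allow_any_host (or an explicit nvmf_subsystem_add_host for the host NQN), the connect attempt is rejected by the target's access control.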
00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.277 20:17:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:46.215 20:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.215 20:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:46.215 20:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.215 20:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:46.215 20:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:48.118 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:48.118 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:48.118 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:48.118 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:48.118 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:48.118 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:48.118 20:18:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.055 20:18:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.055 [2024-05-16 20:18:02.024235] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.055 20:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:50.431 20:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:50.431 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:50.431 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.431 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:50.431 20:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:52.336 20:18:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:52.336 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:52.336 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:52.336 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:52.336 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:52.336 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:52.336 20:18:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.272 20:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 [2024-05-16 20:18:06.018209] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.272 20:18:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:54.208 20:18:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.208 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:54.208 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.208 
20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:54.208 20:18:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:56.112 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:56.112 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:56.112 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.112 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:56.112 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.112 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:56.112 20:18:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.049 20:18:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.049 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:57.049 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:57.049 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.049 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:57.049 20:18:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.049 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.049 [2024-05-16 20:18:10.040125] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 
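The iterations above exercise the full provision/teardown cycle through rpc.py. A minimal sketch of one pass, assuming a running nvmf_tgt, an existing Malloc1 bdev, and the same listener address as this run (the --hostnqn/--hostid options used by the harness are omitted here for brevity):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # target side: subsystem with the serial number the host greps for
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # host side: connect, wait for the namespace to surface, then disconnect
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # target side: remove namespace 5, then the subsystem itself
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

waitforserial in the trace bounds the same wait at 15 retries of two seconds rather than looping indefinitely.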
00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 [2024-05-16 20:18:10.092353] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 [2024-05-16 20:18:10.144485] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 [2024-05-16 20:18:10.192660] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.309 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 [2024-05-16 
20:18:10.240871] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.310 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.569 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.569 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:57.569 "tick_rate": 2100000000, 00:08:57.569 "poll_groups": [ 00:08:57.569 { 00:08:57.569 "name": "nvmf_tgt_poll_group_000", 00:08:57.569 "admin_qpairs": 2, 00:08:57.569 "io_qpairs": 27, 00:08:57.569 "current_admin_qpairs": 0, 00:08:57.569 "current_io_qpairs": 0, 00:08:57.569 "pending_bdev_io": 0, 00:08:57.569 "completed_nvme_io": 127, 00:08:57.569 "transports": [ 00:08:57.569 { 00:08:57.569 "trtype": "RDMA", 00:08:57.569 "pending_data_buffer": 0, 00:08:57.569 "devices": [ 00:08:57.569 { 00:08:57.569 "name": "mlx5_0", 00:08:57.569 "polls": 3404614, 00:08:57.569 "idle_polls": 3404287, 00:08:57.569 "completions": 365, 00:08:57.569 "requests": 182, 00:08:57.569 "request_latency": 32929810, 00:08:57.569 "pending_free_request": 0, 00:08:57.569 "pending_rdma_read": 0, 00:08:57.569 "pending_rdma_write": 0, 00:08:57.569 "pending_rdma_send": 0, 00:08:57.569 "total_send_wrs": 309, 00:08:57.569 "send_doorbell_updates": 158, 00:08:57.569 "total_recv_wrs": 4278, 00:08:57.569 "recv_doorbell_updates": 158 00:08:57.569 }, 00:08:57.569 { 00:08:57.569 "name": "mlx5_1", 00:08:57.569 "polls": 3404614, 00:08:57.569 "idle_polls": 3404614, 00:08:57.569 "completions": 0, 00:08:57.569 "requests": 0, 00:08:57.569 "request_latency": 0, 00:08:57.569 "pending_free_request": 0, 00:08:57.569 
"pending_rdma_read": 0, 00:08:57.569 "pending_rdma_write": 0, 00:08:57.569 "pending_rdma_send": 0, 00:08:57.569 "total_send_wrs": 0, 00:08:57.569 "send_doorbell_updates": 0, 00:08:57.569 "total_recv_wrs": 4096, 00:08:57.569 "recv_doorbell_updates": 1 00:08:57.569 } 00:08:57.569 ] 00:08:57.569 } 00:08:57.569 ] 00:08:57.569 }, 00:08:57.569 { 00:08:57.569 "name": "nvmf_tgt_poll_group_001", 00:08:57.569 "admin_qpairs": 2, 00:08:57.569 "io_qpairs": 26, 00:08:57.569 "current_admin_qpairs": 0, 00:08:57.569 "current_io_qpairs": 0, 00:08:57.569 "pending_bdev_io": 0, 00:08:57.569 "completed_nvme_io": 126, 00:08:57.569 "transports": [ 00:08:57.569 { 00:08:57.569 "trtype": "RDMA", 00:08:57.569 "pending_data_buffer": 0, 00:08:57.569 "devices": [ 00:08:57.569 { 00:08:57.569 "name": "mlx5_0", 00:08:57.569 "polls": 3446950, 00:08:57.569 "idle_polls": 3446632, 00:08:57.569 "completions": 358, 00:08:57.569 "requests": 179, 00:08:57.569 "request_latency": 31777810, 00:08:57.569 "pending_free_request": 0, 00:08:57.569 "pending_rdma_read": 0, 00:08:57.569 "pending_rdma_write": 0, 00:08:57.569 "pending_rdma_send": 0, 00:08:57.569 "total_send_wrs": 304, 00:08:57.569 "send_doorbell_updates": 154, 00:08:57.569 "total_recv_wrs": 4275, 00:08:57.569 "recv_doorbell_updates": 155 00:08:57.569 }, 00:08:57.569 { 00:08:57.569 "name": "mlx5_1", 00:08:57.569 "polls": 3446950, 00:08:57.569 "idle_polls": 3446950, 00:08:57.569 "completions": 0, 00:08:57.569 "requests": 0, 00:08:57.569 "request_latency": 0, 00:08:57.569 "pending_free_request": 0, 00:08:57.569 "pending_rdma_read": 0, 00:08:57.569 "pending_rdma_write": 0, 00:08:57.569 "pending_rdma_send": 0, 00:08:57.569 "total_send_wrs": 0, 00:08:57.569 "send_doorbell_updates": 0, 00:08:57.569 "total_recv_wrs": 4096, 00:08:57.569 "recv_doorbell_updates": 1 00:08:57.569 } 00:08:57.569 ] 00:08:57.569 } 00:08:57.569 ] 00:08:57.569 }, 00:08:57.569 { 00:08:57.569 "name": "nvmf_tgt_poll_group_002", 00:08:57.569 "admin_qpairs": 1, 00:08:57.569 "io_qpairs": 26, 00:08:57.569 "current_admin_qpairs": 0, 00:08:57.569 "current_io_qpairs": 0, 00:08:57.569 "pending_bdev_io": 0, 00:08:57.569 "completed_nvme_io": 76, 00:08:57.569 "transports": [ 00:08:57.569 { 00:08:57.569 "trtype": "RDMA", 00:08:57.569 "pending_data_buffer": 0, 00:08:57.569 "devices": [ 00:08:57.569 { 00:08:57.569 "name": "mlx5_0", 00:08:57.569 "polls": 3467172, 00:08:57.569 "idle_polls": 3466984, 00:08:57.569 "completions": 209, 00:08:57.569 "requests": 104, 00:08:57.569 "request_latency": 17350542, 00:08:57.569 "pending_free_request": 0, 00:08:57.569 "pending_rdma_read": 0, 00:08:57.569 "pending_rdma_write": 0, 00:08:57.569 "pending_rdma_send": 0, 00:08:57.569 "total_send_wrs": 168, 00:08:57.569 "send_doorbell_updates": 93, 00:08:57.569 "total_recv_wrs": 4200, 00:08:57.569 "recv_doorbell_updates": 93 00:08:57.569 }, 00:08:57.569 { 00:08:57.569 "name": "mlx5_1", 00:08:57.569 "polls": 3467172, 00:08:57.569 "idle_polls": 3467172, 00:08:57.569 "completions": 0, 00:08:57.569 "requests": 0, 00:08:57.569 "request_latency": 0, 00:08:57.569 "pending_free_request": 0, 00:08:57.569 "pending_rdma_read": 0, 00:08:57.569 "pending_rdma_write": 0, 00:08:57.569 "pending_rdma_send": 0, 00:08:57.569 "total_send_wrs": 0, 00:08:57.569 "send_doorbell_updates": 0, 00:08:57.569 "total_recv_wrs": 4096, 00:08:57.569 "recv_doorbell_updates": 1 00:08:57.569 } 00:08:57.569 ] 00:08:57.569 } 00:08:57.569 ] 00:08:57.569 }, 00:08:57.569 { 00:08:57.569 "name": "nvmf_tgt_poll_group_003", 00:08:57.569 "admin_qpairs": 2, 00:08:57.569 "io_qpairs": 26, 
00:08:57.569 "current_admin_qpairs": 0, 00:08:57.569 "current_io_qpairs": 0, 00:08:57.569 "pending_bdev_io": 0, 00:08:57.569 "completed_nvme_io": 126, 00:08:57.569 "transports": [ 00:08:57.569 { 00:08:57.569 "trtype": "RDMA", 00:08:57.569 "pending_data_buffer": 0, 00:08:57.569 "devices": [ 00:08:57.569 { 00:08:57.569 "name": "mlx5_0", 00:08:57.569 "polls": 2647842, 00:08:57.569 "idle_polls": 2647530, 00:08:57.569 "completions": 358, 00:08:57.569 "requests": 179, 00:08:57.569 "request_latency": 32913700, 00:08:57.569 "pending_free_request": 0, 00:08:57.570 "pending_rdma_read": 0, 00:08:57.570 "pending_rdma_write": 0, 00:08:57.570 "pending_rdma_send": 0, 00:08:57.570 "total_send_wrs": 304, 00:08:57.570 "send_doorbell_updates": 153, 00:08:57.570 "total_recv_wrs": 4275, 00:08:57.570 "recv_doorbell_updates": 154 00:08:57.570 }, 00:08:57.570 { 00:08:57.570 "name": "mlx5_1", 00:08:57.570 "polls": 2647842, 00:08:57.570 "idle_polls": 2647842, 00:08:57.570 "completions": 0, 00:08:57.570 "requests": 0, 00:08:57.570 "request_latency": 0, 00:08:57.570 "pending_free_request": 0, 00:08:57.570 "pending_rdma_read": 0, 00:08:57.570 "pending_rdma_write": 0, 00:08:57.570 "pending_rdma_send": 0, 00:08:57.570 "total_send_wrs": 0, 00:08:57.570 "send_doorbell_updates": 0, 00:08:57.570 "total_recv_wrs": 4096, 00:08:57.570 "recv_doorbell_updates": 1 00:08:57.570 } 00:08:57.570 ] 00:08:57.570 } 00:08:57.570 ] 00:08:57.570 } 00:08:57.570 ] 00:08:57.570 }' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.570 20:18:10 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 114971862 > 0 )) 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:57.570 rmmod nvme_rdma 00:08:57.570 rmmod nvme_fabrics 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2931140 ']' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2931140 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 2931140 ']' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 2931140 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:57.570 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2931140 00:08:57.829 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:57.829 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:57.829 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2931140' 00:08:57.829 killing process with pid 2931140 00:08:57.829 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 2931140 00:08:57.829 [2024-05-16 20:18:10.579274] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:57.829 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 2931140 00:08:57.829 [2024-05-16 20:18:10.657983] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:58.089 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.089 20:18:10 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:58.089 00:08:58.089 real 0m36.366s 00:08:58.089 user 2m2.255s 00:08:58.089 sys 0m5.810s 00:08:58.089 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:58.089 20:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.089 ************************************ 00:08:58.089 END TEST nvmf_rpc 00:08:58.089 ************************************ 00:08:58.089 20:18:10 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:58.089 20:18:10 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
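The totals checked above come from the jsum helper in target/rpc.sh, which runs a jq filter over the captured nvmf_get_stats output and sums the values with awk. A sketch consistent with the trace, assuming the stats JSON is held in $stats as shown:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'                              # 7 in this run
    jsum '.poll_groups[].io_qpairs'                                 # 105 in this run
    jsum '.poll_groups[].transports[].devices[].completions'       # 1290 in this run
    jsum '.poll_groups[].transports[].devices[].request_latency'   # 114971862 in this run

Each sum only has to be greater than zero for the checks at rpc.sh@112-118 to pass.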
00:08:58.089 20:18:10 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:58.089 20:18:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:58.089 ************************************ 00:08:58.089 START TEST nvmf_invalid 00:08:58.089 ************************************ 00:08:58.089 20:18:10 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:58.089 * Looking for test storage... 00:08:58.089 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.089 
20:18:11 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.089 20:18:11 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:04.653 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:04.653 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:04.653 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:04.654 Found net devices under 0000:da:00.0: mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:04.654 Found net devices under 0000:da:00.1: mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:04.654 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.654 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:04.654 altname enp218s0f0np0 00:09:04.654 altname ens818f0np0 00:09:04.654 inet 192.168.100.8/24 scope global mlx_0_0 00:09:04.654 valid_lft forever preferred_lft forever 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:04.654 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.654 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:04.654 altname enp218s0f1np1 00:09:04.654 altname ens818f1np1 00:09:04.654 inet 192.168.100.9/24 scope global mlx_0_1 00:09:04.654 valid_lft forever preferred_lft forever 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- 
# mapfile -t rxe_net_devs 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:04.654 192.168.100.9' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:04.654 192.168.100.9' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:04.654 192.168.100.9' 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:04.654 20:18:16 nvmf_rdma.nvmf_invalid -- 
nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2940187 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2940187 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 2940187 ']' 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:04.655 20:18:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.655 [2024-05-16 20:18:16.637911] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:04.655 [2024-05-16 20:18:16.637955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.655 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.655 [2024-05-16 20:18:16.690176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.655 [2024-05-16 20:18:16.764997] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.655 [2024-05-16 20:18:16.765032] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.655 [2024-05-16 20:18:16.765039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.655 [2024-05-16 20:18:16.765045] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.655 [2024-05-16 20:18:16.765051] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
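Before the invalid-input cases start, common.sh resolves the RDMA test addresses by walking the mlx_0_* netdevs detected above. A simplified sketch of the per-interface lookup the trace performs, assuming the same interface names and addresses (the real helper collects RDMA_IP_LIST and takes its first and second entries):

    get_ip_address() {
        local interface=$1
        # first IPv4 address on the netdev, stripped of its /prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run

With those in place, nvmfappstart launches nvmf_tgt with -e 0xFFFF and waits for the RPC socket before the invalid target, serial number, and model number requests below are issued.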
00:09:04.655 [2024-05-16 20:18:16.765088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.655 [2024-05-16 20:18:16.765208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.655 [2024-05-16 20:18:16.765305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.655 [2024-05-16 20:18:16.765305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.655 20:18:17 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:04.655 20:18:17 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:09:04.655 20:18:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.655 20:18:17 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.655 20:18:17 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.655 20:18:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.655 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:04.655 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13158 00:09:04.655 [2024-05-16 20:18:17.639879] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:04.914 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:04.914 { 00:09:04.914 "nqn": "nqn.2016-06.io.spdk:cnode13158", 00:09:04.914 "tgt_name": "foobar", 00:09:04.914 "method": "nvmf_create_subsystem", 00:09:04.914 "req_id": 1 00:09:04.914 } 00:09:04.914 Got JSON-RPC error response 00:09:04.914 response: 00:09:04.914 { 00:09:04.914 "code": -32603, 00:09:04.914 "message": "Unable to find target foobar" 00:09:04.914 }' 00:09:04.914 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:04.914 { 00:09:04.914 "nqn": "nqn.2016-06.io.spdk:cnode13158", 00:09:04.914 "tgt_name": "foobar", 00:09:04.914 "method": "nvmf_create_subsystem", 00:09:04.914 "req_id": 1 00:09:04.914 } 00:09:04.914 Got JSON-RPC error response 00:09:04.914 response: 00:09:04.914 { 00:09:04.914 "code": -32603, 00:09:04.914 "message": "Unable to find target foobar" 00:09:04.914 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:04.914 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:04.914 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8743 00:09:04.914 [2024-05-16 20:18:17.828544] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8743: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:04.914 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:04.914 { 00:09:04.914 "nqn": "nqn.2016-06.io.spdk:cnode8743", 00:09:04.914 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:04.914 "method": "nvmf_create_subsystem", 00:09:04.914 "req_id": 1 00:09:04.914 } 00:09:04.914 Got JSON-RPC error response 00:09:04.914 response: 00:09:04.914 { 00:09:04.914 "code": -32602, 00:09:04.914 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:04.914 }' 00:09:04.914 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ 
request: 00:09:04.914 { 00:09:04.914 "nqn": "nqn.2016-06.io.spdk:cnode8743", 00:09:04.914 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:04.914 "method": "nvmf_create_subsystem", 00:09:04.914 "req_id": 1 00:09:04.914 } 00:09:04.914 Got JSON-RPC error response 00:09:04.914 response: 00:09:04.914 { 00:09:04.914 "code": -32602, 00:09:04.914 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:04.915 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:04.915 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:04.915 20:18:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4155 00:09:05.174 [2024-05-16 20:18:18.017124] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4155: invalid model number 'SPDK_Controller' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:05.174 { 00:09:05.174 "nqn": "nqn.2016-06.io.spdk:cnode4155", 00:09:05.174 "model_number": "SPDK_Controller\u001f", 00:09:05.174 "method": "nvmf_create_subsystem", 00:09:05.174 "req_id": 1 00:09:05.174 } 00:09:05.174 Got JSON-RPC error response 00:09:05.174 response: 00:09:05.174 { 00:09:05.174 "code": -32602, 00:09:05.174 "message": "Invalid MN SPDK_Controller\u001f" 00:09:05.174 }' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:05.174 { 00:09:05.174 "nqn": "nqn.2016-06.io.spdk:cnode4155", 00:09:05.174 "model_number": "SPDK_Controller\u001f", 00:09:05.174 "method": "nvmf_create_subsystem", 00:09:05.174 "req_id": 1 00:09:05.174 } 00:09:05.174 Got JSON-RPC error response 00:09:05.174 response: 00:09:05.174 { 00:09:05.174 "code": -32602, 00:09:05.174 "message": "Invalid MN SPDK_Controller\u001f" 00:09:05.174 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 
00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x73' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:05.174 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:05.175 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.175 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.175 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:05.175 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:05.175 
20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:05.175 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.175 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'I^eML`1&[s=H$mVxy,(Gd' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'I^eML`1&[s=H$mVxy,(Gd' nqn.2016-06.io.spdk:cnode16682 00:09:05.434 [2024-05-16 20:18:18.338210] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16682: invalid serial number 'I^eML`1&[s=H$mVxy,(Gd' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:05.434 { 00:09:05.434 "nqn": "nqn.2016-06.io.spdk:cnode16682", 00:09:05.434 "serial_number": "I^eML`1&[s=H$mVxy,(Gd", 00:09:05.434 "method": "nvmf_create_subsystem", 00:09:05.434 "req_id": 1 00:09:05.434 } 00:09:05.434 Got JSON-RPC error response 00:09:05.434 response: 00:09:05.434 { 00:09:05.434 "code": -32602, 00:09:05.434 "message": "Invalid SN I^eML`1&[s=H$mVxy,(Gd" 00:09:05.434 }' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:05.434 { 00:09:05.434 "nqn": "nqn.2016-06.io.spdk:cnode16682", 00:09:05.434 "serial_number": "I^eML`1&[s=H$mVxy,(Gd", 00:09:05.434 "method": "nvmf_create_subsystem", 00:09:05.434 "req_id": 1 00:09:05.434 } 00:09:05.434 Got JSON-RPC error response 00:09:05.434 response: 00:09:05.434 { 00:09:05.434 "code": -32602, 00:09:05.434 "message": "Invalid SN I^eML`1&[s=H$mVxy,(Gd" 00:09:05.434 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' 
'61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:05.434 20:18:18 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:05.434 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.435 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.694 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=/ 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:05.695 20:18:18 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.695 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:09:05.696 20:18:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'q(9)LEn[hPoM;>/X;1Xv*=ONUl#g|JQfg-VXZ&}/X;1Xv*=ONUl#g|JQfg-VXZ&}/X;1Xv*=ONUl#g|JQfg-VXZ&}/X;1Xv*=ONUl#g|JQfg-VXZ&}/X;1Xv*=ONUl#g|JQfg-VXZ&} /dev/null' 00:09:08.289 20:18:21 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.289 20:18:21 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.289 20:18:21 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.289 20:18:21 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.289 20:18:21 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:14.857 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:14.857 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:14.857 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:14.858 
20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:14.858 Found net devices under 0000:da:00.0: mlx_0_0 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:14.858 Found net devices under 0000:da:00.1: mlx_0_1 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:14.858 20:18:26 
nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:14.858 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:14.858 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:14.858 altname enp218s0f0np0 00:09:14.858 altname ens818f0np0 00:09:14.858 inet 192.168.100.8/24 scope global mlx_0_0 00:09:14.858 valid_lft forever preferred_lft forever 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:14.858 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:14.858 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:14.858 altname enp218s0f1np1 00:09:14.858 altname ens818f1np1 00:09:14.858 inet 192.168.100.9/24 scope global mlx_0_1 00:09:14.858 valid_lft forever preferred_lft forever 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:14.858 20:18:26 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:14.858 192.168.100.9' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:14.858 192.168.100.9' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:14.858 
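Address discovery for the RDMA netdevs traced here is a plain ip/awk/cut pipeline per interface, with the resulting two-line RDMA_IP_LIST split into first and second target IPs via head and tail. Below is a condensed sketch of the expanded commands above, assuming the mlx_0_0/mlx_0_1 names and 192.168.100.8/9 addresses seen in this run; the real get_ip_address and get_rdma_if_list helpers live in nvmf/common.sh and appear in this log only through their expanded form.

    # Condensed form of the expanded commands above (mlx_0_0/mlx_0_1 are the
    # netdevs this run discovered; substitute your own RDMA interfaces).
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run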
20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:14.858 192.168.100.9' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2944620 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2944620 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 2944620 ']' 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.858 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:14.859 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.859 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:14.859 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:14.859 [2024-05-16 20:18:27.126730] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:14.859 [2024-05-16 20:18:27.126777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.859 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.859 [2024-05-16 20:18:27.187100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.859 [2024-05-16 20:18:27.265320] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.859 [2024-05-16 20:18:27.265355] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.859 [2024-05-16 20:18:27.265362] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.859 [2024-05-16 20:18:27.265368] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:14.859 [2024-05-16 20:18:27.265373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.859 [2024-05-16 20:18:27.265469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.859 [2024-05-16 20:18:27.265554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.859 [2024-05-16 20:18:27.265555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.118 20:18:27 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.118 [2024-05-16 20:18:28.005536] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf8b110/0xf8f600) succeed. 00:09:15.118 [2024-05-16 20:18:28.015810] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf8c6b0/0xfd0c90) succeed. 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.376 Malloc0 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.376 Delay0 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:15.376 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.377 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.377 [2024-05-16 20:18:28.158495] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:15.377 [2024-05-16 20:18:28.158869] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:15.377 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.377 20:18:28 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:15.377 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.377 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.377 20:18:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.377 20:18:28 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:15.377 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.377 [2024-05-16 20:18:28.240880] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:17.911 Initializing NVMe Controllers 00:09:17.911 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:17.911 controller IO queue size 128 less than required 00:09:17.911 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:17.911 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:17.911 Initialization complete. Launching workers. 
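Condensed from the xtrace above, the target-side configuration that the traced target/abort.sh performs before launching the abort example amounts to the following RPC sequence (paths, NQNs and flags copied from the trace; this is a sketch of what the trace shows, not the canonical script itself):

  # Sketch of the abort-test target setup traced above (workspace paths as in this run).
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # RDMA transport with the shared-buffer settings from the trace
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256

  # 64 MB malloc bdev with 4096-byte blocks, wrapped in a delay bdev
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

  # Subsystem cnode0 with the delay bdev as namespace 1, listening on 192.168.100.8:4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

  # Host-side abort workload, exactly as invoked in the trace
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128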
00:09:17.911 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51267 00:09:17.911 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51328, failed to submit 62 00:09:17.911 success 51268, unsuccess 60, failed 0 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:17.911 rmmod nvme_rdma 00:09:17.911 rmmod nvme_fabrics 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2944620 ']' 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2944620 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 2944620 ']' 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 2944620 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2944620 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2944620' 00:09:17.911 killing process with pid 2944620 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@965 -- # kill 2944620 00:09:17.911 [2024-05-16 20:18:30.437087] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@970 -- # wait 2944620 00:09:17.911 [2024-05-16 20:18:30.500927] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # 
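The teardown traced in this block reduces to deleting the subsystem and unloading the initiator-side RDMA modules before the target process is killed; roughly, assuming the same rpc.py path and pid as above (a sketch of the traced cleanup, not the nvmftestfini helper itself):

  # Sketch of the cleanup sequence traced above
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0

  sync
  modprobe -v -r nvme-rdma      # -v prints the rmmod lines seen in the log
  modprobe -v -r nvme-fabrics

  kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=2944620 in this run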
[[ rdma == \t\c\p ]] 00:09:17.911 00:09:17.911 real 0m9.687s 00:09:17.911 user 0m14.116s 00:09:17.911 sys 0m4.839s 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:17.911 20:18:30 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.911 ************************************ 00:09:17.911 END TEST nvmf_abort 00:09:17.911 ************************************ 00:09:17.911 20:18:30 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:17.911 20:18:30 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:17.911 20:18:30 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.911 20:18:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:17.911 ************************************ 00:09:17.911 START TEST nvmf_ns_hotplug_stress 00:09:17.911 ************************************ 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:17.911 * Looking for test storage... 00:09:17.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.911 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.912 20:18:30 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:24.537 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:24.537 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:24.537 Found net devices under 0000:da:00.0: mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:24.537 Found net devices under 0000:da:00.1: mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:24.537 20:18:36 
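The device-discovery block above resolves each Mellanox PCI function to its kernel net device through sysfs; a minimal stand-alone version of that lookup (device addresses taken from this host's trace, loop structure illustrative rather than the gather_supported_nvmf_pci_devs helper itself) would be:

  # Map an RDMA-capable PCI function to its net device name via sysfs,
  # the same /sys/bus/pci/devices/$pci/net glob the trace expands above.
  for pci in 0000:da:00.0 0000:da:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue
          echo "Found net device under $pci: $(basename "$netdir")"
      done
  done
  # Prints mlx_0_0 and mlx_0_1 on this system, matching the log above.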
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:24.537 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:24.537 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:24.537 altname enp218s0f0np0 00:09:24.537 altname ens818f0np0 00:09:24.537 inet 192.168.100.8/24 scope 
global mlx_0_0 00:09:24.537 valid_lft forever preferred_lft forever 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:24.537 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:24.537 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:24.537 altname enp218s0f1np1 00:09:24.537 altname ens818f1np1 00:09:24.537 inet 192.168.100.9/24 scope global mlx_0_1 00:09:24.537 valid_lft forever preferred_lft forever 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:24.537 20:18:36 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:24.537 192.168.100.9' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:24.537 192.168.100.9' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:24.537 192.168.100.9' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2948553 00:09:24.537 20:18:36 
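The target IPs used from here on (192.168.100.8 and 192.168.100.9) come out of the ip/awk/cut pipeline visible in the trace; re-stated as a stand-alone helper (the real one lives in test/nvmf/common.sh), it is simply:

  # Extract the primary IPv4 address of an RDMA interface, as the traced get_ip_address does.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run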
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2948553 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 2948553 ']' 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:24.537 20:18:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 [2024-05-16 20:18:36.769015] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:24.537 [2024-05-16 20:18:36.769065] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.537 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.537 [2024-05-16 20:18:36.832122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.537 [2024-05-16 20:18:36.912127] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.537 [2024-05-16 20:18:36.912162] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.537 [2024-05-16 20:18:36.912169] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.537 [2024-05-16 20:18:36.912175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.537 [2024-05-16 20:18:36.912180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
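nvmfappstart, as traced here, boils down to launching nvmf_tgt in the background with the core mask and trace flags shown, then waiting until its RPC socket is listening. A minimal equivalent follows; the socket-polling loop is a simplification of the waitforlisten helper, not the helper itself:

  # Start the target as in the log and wait for its RPC socket to appear.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  rpc_addr=/var/tmp/spdk.sock
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  until [ -S "$rpc_addr" ]; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done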
00:09:24.537 [2024-05-16 20:18:36.912277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.537 [2024-05-16 20:18:36.912339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.537 [2024-05-16 20:18:36.912340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.797 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:24.797 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:09:24.797 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.797 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.797 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.797 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.797 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:24.797 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:25.056 [2024-05-16 20:18:37.790514] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ebb110/0x1ebf600) succeed. 00:09:25.056 [2024-05-16 20:18:37.800643] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ebc6b0/0x1f00c90) succeed. 00:09:25.056 20:18:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.316 20:18:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:25.316 [2024-05-16 20:18:38.276140] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:25.316 [2024-05-16 20:18:38.276461] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:25.316 20:18:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:25.574 20:18:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:25.834 Malloc0 00:09:25.834 20:18:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:25.834 Delay0 00:09:26.094 20:18:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.094 20:18:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 
00:09:26.352 NULL1 00:09:26.352 20:18:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:26.611 20:18:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2948970 00:09:26.611 20:18:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:26.611 20:18:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:26.611 20:18:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.611 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.546 Read completed with error (sct=0, sc=11) 00:09:27.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.805 20:18:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.805 20:18:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:27.805 20:18:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:28.064 true 00:09:28.064 20:18:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:28.064 20:18:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.002 20:18:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.002 20:18:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:29.002 20:18:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:29.261 true 00:09:29.261 20:18:42 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:29.261 20:18:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.196 20:18:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.196 20:18:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:30.196 20:18:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:30.454 true 00:09:30.454 20:18:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:30.454 20:18:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.390 20:18:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.390 20:18:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:31.390 20:18:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:31.649 true 00:09:31.649 20:18:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:31.649 20:18:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.586 20:18:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.586 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.586 20:18:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:32.586 20:18:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:32.845 true 00:09:32.845 20:18:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:32.845 20:18:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.780 20:18:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.780 20:18:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:33.780 20:18:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:34.039 true 00:09:34.039 20:18:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:34.039 20:18:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.975 20:18:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.975 20:18:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:34.975 20:18:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:35.234 true 00:09:35.234 20:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:35.234 20:18:48 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.170 20:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.170 20:18:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:36.170 20:18:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:36.430 true 00:09:36.430 20:18:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:36.430 20:18:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.365 20:18:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.365 20:18:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:37.365 20:18:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:37.623 true 00:09:37.623 20:18:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:37.623 20:18:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.559 20:18:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.559 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:09:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.559 20:18:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:38.559 20:18:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:38.817 true 00:09:38.817 20:18:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:38.817 20:18:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.753 20:18:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.753 20:18:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:39.753 20:18:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:40.011 true 00:09:40.011 20:18:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:40.011 20:18:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.947 20:18:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.947 20:18:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:40.947 20:18:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:41.205 true 00:09:41.205 20:18:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:41.205 20:18:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.463 20:18:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.463 20:18:54 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:41.463 20:18:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:41.723 true 00:09:41.723 20:18:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:41.723 20:18:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.097 20:18:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.097 20:18:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:43.097 20:18:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:43.097 true 00:09:43.356 20:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:43.356 20:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.923 20:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.182 20:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:44.182 20:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:44.440 true 00:09:44.440 20:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:44.440 20:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.374 20:18:58 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.374 20:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:45.374 20:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:45.632 true 00:09:45.632 20:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:45.632 20:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.567 20:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.567 20:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:46.567 20:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:46.824 true 00:09:46.824 20:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:46.824 20:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.759 20:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.759 20:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1018 00:09:47.759 20:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:48.017 true 00:09:48.017 20:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:48.017 20:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.952 20:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.952 20:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:48.952 20:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:49.211 true 00:09:49.211 20:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:49.211 20:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.146 20:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.405 20:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:50.405 20:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:50.405 true 00:09:50.405 20:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:50.405 20:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.419 20:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.419 20:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:51.419 20:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:51.677 true 00:09:51.677 20:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:51.677 20:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.609 20:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.610 20:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:52.610 20:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:52.867 true 00:09:52.867 20:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:52.867 20:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.802 20:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.802 20:19:06 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:53.802 20:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:54.060 true 00:09:54.060 20:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:54.060 20:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.995 20:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.995 20:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:54.995 20:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:55.253 true 00:09:55.253 20:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:55.253 20:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.186 20:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.444 20:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:56.444 20:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:56.444 true 00:09:56.444 20:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970 00:09:56.444 20:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.379 20:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:57.637 20:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:09:57.637 20:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:09:57.637 true
00:09:57.637 20:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970
00:09:57.637 20:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:57.895 20:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:58.153 20:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:09:58.153 20:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:09:58.153 true
00:09:58.153 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970
00:09:58.153 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:58.411 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:58.669 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:58.669 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:58.669 true
00:09:58.669 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970
00:09:58.669 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:58.927 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:59.185 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:59.185 20:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:59.185 true
00:09:59.186 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970
00:09:59.186 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:59.444 Initializing NVMe Controllers
00:09:59.444 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:59.444 Controller IO queue size 128, less than required.
00:09:59.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:59.444 Controller IO queue size 128, less than required.
00:09:59.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:59.444 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:59.444 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:59.444 Initialization complete. Launching workers.
00:09:59.444 ========================================================
00:09:59.444 Latency(us)
00:09:59.444 Device Information : IOPS MiB/s Average min max
00:09:59.444 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5337.37 2.61 21551.98 901.97 1137697.13
00:09:59.444 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 32990.46 16.11 3879.80 2288.60 293751.63
00:09:59.444 ========================================================
00:09:59.444 Total : 38327.82 18.71 6340.75 901.97 1137697.13
00:09:59.444
00:09:59.444 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:59.703 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:09:59.703 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:09:59.703 true
00:09:59.962 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948970
00:09:59.962 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2948970) - No such process
00:09:59.962 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2948970
00:09:59.962 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:59.962 20:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:00.220 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:00.220 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:00.220 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:00.220 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:00.220 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:00.478 null0
00:10:00.478 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:00.478 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:00.478 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:00.478 null1
00:10:00.478 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:00.478 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.478 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:00.736 null2 00:10:00.736 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.736 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.736 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:00.994 null3 00:10:00.994 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.994 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.994 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:00.994 null4 00:10:00.995 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.995 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.995 20:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:01.253 null5 00:10:01.253 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.253 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.253 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:01.511 null6 00:10:01.511 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.511 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.511 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:01.511 null7 00:10:01.511 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.511 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.511 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:01.511 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.769 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
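For orientation, the repeated sh@44-sh@50 markers above trace the single-threaded phase of ns_hotplug_stress.sh: while the background I/O process is still alive, the test removes namespace 1, re-adds the Delay0 bdev as a namespace, bumps a counter, and resizes the NULL1 bdev to that value. A minimal bash sketch of that loop, reconstructed only from the trace markers (rpc_py, perf_pid and the starting null_size are illustrative assumptions, not copied from the script):

# Sketch reconstructed from the sh@44-sh@50 traces above; names are illustrative.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
perf_pid=2948970   # background I/O generator pid seen in the kill -0 checks
null_size=1000     # assumed starting value; the traces above show 1008..1030

while kill -0 "$perf_pid"; do                                        # sh@44
	$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
	$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
	null_size=$((null_size + 1))                                     # sh@49
	$rpc_py bdev_null_resize NULL1 $null_size                        # sh@50
done

Once kill -0 reports "No such process" (the line 44 error above), the loop ends and the sh@53 wait plus the sh@54-sh@55 namespace clean-up run.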
00:10:01.769 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.769 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
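The sh@58-sh@64 markers in the surrounding lines trace the set-up of the multi-threaded phase: eight null bdevs (null0 through null7, with 100 and 4096 as the size and block-size arguments exactly as traced) are created, then one add_remove worker per bdev is started in the background and its pid recorded. A sketch of that set-up under the same caveat (loop shape and variable names inferred from the trace markers, not copied from the script):

# Sketch reconstructed from the sh@58-sh@64 traces; illustrative only.
nthreads=8                                      # sh@58
pids=()                                         # sh@58

for ((i = 0; i < nthreads; i++)); do            # sh@59
	$rpc_py bdev_null_create "null$i" 100 4096  # sh@60
done

for ((i = 0; i < nthreads; i++)); do            # sh@62
	add_remove $((i + 1)) "null$i" &            # sh@63, e.g. "add_remove 1 null0"
	pids+=($!)                                  # sh@64
done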
00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
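Each backgrounded worker runs the add_remove function whose body shows up as the sh@14-sh@18 markers in these lines: it repeatedly attaches its null bdev as the given namespace ID and detaches it again, ten times per the (( i < 10 )) guard. Again a sketch inferred from the traces rather than taken from the script:

# Sketch reconstructed from the sh@14-sh@18 traces; illustrative only.
add_remove() {
	local nsid=$1 bdev=$2                                                            # sh@14
	for ((i = 0; i < 10; i++)); do                                                   # sh@16
		$rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
		$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
	done
}

The sh@66 marker further down ("wait 2955006 2955007 ...") is then consistent with a final wait on the eight recorded worker pids.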
00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2955006 2955007 2955010 2955011 2955013 2955015 2955017 2955018 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.770 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.029 20:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.288 20:19:15 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.288 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.546 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.546 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.546 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.546 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.547 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.547 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.547 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.547 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.547 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.547 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.547 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.806 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.064 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.064 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.064 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.064 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.064 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.064 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.064 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.065 20:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.065 20:19:16 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.065 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.324 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.324 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.324 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.324 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.324 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.324 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.324 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.324 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.584 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.843 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.843 20:19:16 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.102 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.102 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.102 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.102 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.102 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.102 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.102 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.102 20:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.360 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.619 20:19:17 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.619 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.878 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.137 
20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.137 20:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.137 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.137 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.137 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.137 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.137 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.137 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.137 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.137 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:05.396 rmmod nvme_rdma 00:10:05.396 rmmod nvme_fabrics 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2948553 ']' 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2948553 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 2948553 ']' 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 2948553 00:10:05.396 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:10:05.397 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:05.397 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2948553 00:10:05.397 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:05.397 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:05.397 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2948553' 00:10:05.397 killing process with pid 2948553 00:10:05.397 20:19:18 
nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 2948553 00:10:05.397 [2024-05-16 20:19:18.360778] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:05.397 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 2948553 00:10:05.655 [2024-05-16 20:19:18.428417] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:05.655 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.655 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:05.655 00:10:05.655 real 0m47.853s 00:10:05.655 user 3m19.854s 00:10:05.655 sys 0m11.765s 00:10:05.655 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:05.655 20:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.655 ************************************ 00:10:05.655 END TEST nvmf_ns_hotplug_stress 00:10:05.655 ************************************ 00:10:05.655 20:19:18 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:05.655 20:19:18 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:05.655 20:19:18 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:05.655 20:19:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:05.914 ************************************ 00:10:05.914 START TEST nvmf_connect_stress 00:10:05.914 ************************************ 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:05.914 * Looking for test storage... 
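The interleaved namespace traffic above comes from ns_hotplug_stress.sh lines 16-18: eight workers, one per namespace ID 1-8, each hot-adding and hot-removing its namespace on nqn.2016-06.io.spdk:cnode1 ten times, backed by the null bdevs null0-null7. A minimal bash sketch of that pattern, assuming a helper named add_remove and one backgrounded worker per namespace (the real script may arrange this differently):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

add_remove() {
    # Hypothetical helper: hot-add, then hot-remove, a single namespace, ten times over.
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

# One worker per namespace ID, each paired with the matching null bdev.
for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &
done
wait

Running the workers concurrently would explain the shuffled namespace order in the trace; each (( ++i )) / (( i < 10 )) pair belongs to one worker's private loop counter.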
00:10:05.914 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:05.914 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.915 20:19:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:12.482 20:19:24 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:12.482 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:12.482 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:12.483 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:12.483 Found net devices under 0000:da:00.0: mlx_0_0 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:12.483 Found net devices under 0000:da:00.1: mlx_0_1 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # 
continue 2 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:12.483 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:12.483 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:12.483 altname enp218s0f0np0 00:10:12.483 altname ens818f0np0 00:10:12.483 inet 192.168.100.8/24 scope global mlx_0_0 00:10:12.483 valid_lft forever preferred_lft forever 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:12.483 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:12.483 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:12.483 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:12.483 altname enp218s0f1np1 00:10:12.484 altname ens818f1np1 00:10:12.484 inet 192.168.100.9/24 scope global mlx_0_1 00:10:12.484 valid_lft forever preferred_lft forever 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:12.484 192.168.100.9' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:12.484 192.168.100.9' 
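Every interface lookup traced here (common.sh@112-113) is the same three-stage pipeline wrapped in get_ip_address, called once per RDMA netdev by allocate_nic_ips and again by get_available_rdma_ips. A condensed sketch, using the mlx_0_0/mlx_0_1 names seen in this run:

get_ip_address() {
    # Print the IPv4 address(es) on the interface, with the /prefix stripped.
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    get_ip_address "$nic"    # 192.168.100.8 and 192.168.100.9 on this host
done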
00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:12.484 192.168.100.9' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2959205 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2959205 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 2959205 ']' 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:12.484 20:19:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.484 [2024-05-16 20:19:24.559757] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:10:12.484 [2024-05-16 20:19:24.559810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.484 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.484 [2024-05-16 20:19:24.623378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.484 [2024-05-16 20:19:24.697283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:12.484 [2024-05-16 20:19:24.697316] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.484 [2024-05-16 20:19:24.697323] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.484 [2024-05-16 20:19:24.697329] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.484 [2024-05-16 20:19:24.697334] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.484 [2024-05-16 20:19:24.697449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.484 [2024-05-16 20:19:24.697537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.484 [2024-05-16 20:19:24.697539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.484 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.484 [2024-05-16 20:19:25.426147] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fc2110/0x1fc6600) succeed. 00:10:12.484 [2024-05-16 20:19:25.436437] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fc36b0/0x2007c90) succeed. 
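The trace above shows the connect_stress target coming up: nvmf_tgt is launched with core mask 0xE, the reactors start on cores 1-3, and rpc_cmd then creates the RDMA transport, which probes both mlx5 ports. A minimal manual equivalent, assuming rpc_cmd simply forwards its arguments to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock as the autotest helpers do, would look roughly like:

# Sketch only: start the target and create the RDMA transport by hand.
# Binary path, socket, and flags are taken from the log entries above.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
sleep 2   # stand-in for waitforlisten, which polls the RPC socket until it answers
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192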
00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.744 [2024-05-16 20:19:25.546530] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:12.744 [2024-05-16 20:19:25.546882] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.744 NULL1 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2959453 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.744 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:12.745 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.003 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.003 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:13.003 20:19:25 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.003 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.003 20:19:25 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.569 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.569 20:19:26 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:13.569 20:19:26 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.569 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.569 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.828 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.828 20:19:26 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:13.828 20:19:26 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.828 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.828 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.088 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.088 20:19:26 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:14.088 20:19:26 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.088 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.088 20:19:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.346 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.346 20:19:27 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:14.346 20:19:27 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.346 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.346 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.914 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.914 20:19:27 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:14.914 20:19:27 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.914 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.914 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.173 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.173 20:19:27 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:15.173 20:19:27 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.173 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:10:15.173 20:19:27 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.444 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.444 20:19:28 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:15.444 20:19:28 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.444 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.445 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.709 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.709 20:19:28 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:15.709 20:19:28 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.710 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.710 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.968 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.968 20:19:28 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:15.968 20:19:28 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.968 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.968 20:19:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.535 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.535 20:19:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:16.535 20:19:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.535 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.535 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.793 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.793 20:19:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:16.793 20:19:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.793 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.793 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.052 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.052 20:19:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:17.052 20:19:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.052 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.052 20:19:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.310 20:19:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.310 20:19:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:17.310 20:19:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.310 20:19:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.310 20:19:30 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.568 20:19:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.568 20:19:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:17.568 20:19:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.568 20:19:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.568 20:19:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.134 20:19:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.134 20:19:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:18.134 20:19:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.134 20:19:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.134 20:19:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.391 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.392 20:19:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:18.392 20:19:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.392 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.392 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.650 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.650 20:19:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:18.650 20:19:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.650 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.650 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.908 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.908 20:19:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:18.908 20:19:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.908 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.908 20:19:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.475 20:19:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.475 20:19:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:19.475 20:19:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.475 20:19:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.475 20:19:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.733 20:19:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.733 20:19:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:19.733 20:19:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.733 20:19:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.733 20:19:32 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.999 20:19:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.999 20:19:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:19.999 20:19:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.000 20:19:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.000 20:19:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.264 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.264 20:19:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:20.264 20:19:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.264 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.264 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.522 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.522 20:19:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:20.522 20:19:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.522 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.522 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.126 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.126 20:19:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:21.126 20:19:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.126 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.126 20:19:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.411 20:19:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.411 20:19:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:21.411 20:19:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.411 20:19:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.411 20:19:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.668 20:19:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.668 20:19:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:21.668 20:19:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.668 20:19:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.668 20:19:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.927 20:19:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.927 20:19:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:21.927 20:19:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.927 20:19:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.927 20:19:34 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.185 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.185 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:22.185 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.185 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.185 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.752 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.752 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:22.752 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.752 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.752 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.752 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:23.011 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.011 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2959453 00:10:23.012 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2959453) - No such process 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2959453 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:23.012 rmmod nvme_rdma 00:10:23.012 rmmod nvme_fabrics 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2959205 ']' 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2959205 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 2959205 ']' 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 2959205 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:10:23.012 20:19:35 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2959205 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2959205' 00:10:23.012 killing process with pid 2959205 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 2959205 00:10:23.012 [2024-05-16 20:19:35.893920] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:23.012 20:19:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 2959205 00:10:23.012 [2024-05-16 20:19:35.960546] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:23.271 20:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.271 20:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:23.271 00:10:23.271 real 0m17.460s 00:10:23.271 user 0m41.996s 00:10:23.271 sys 0m6.082s 00:10:23.271 20:19:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.271 20:19:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.271 ************************************ 00:10:23.271 END TEST nvmf_connect_stress 00:10:23.271 ************************************ 00:10:23.272 20:19:36 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:23.272 20:19:36 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:23.272 20:19:36 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.272 20:19:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:23.272 ************************************ 00:10:23.272 START TEST nvmf_fused_ordering 00:10:23.272 ************************************ 00:10:23.272 20:19:36 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:23.531 * Looking for test storage... 
00:10:23.531 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:23.531 20:19:36 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:30.099 20:19:42 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:30.099 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:30.099 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:30.099 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:30.100 Found net devices under 0000:da:00.0: mlx_0_0 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:30.100 Found net devices under 0000:da:00.1: mlx_0_1 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:30.100 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:30.100 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:30.100 altname enp218s0f0np0 00:10:30.100 altname ens818f0np0 00:10:30.100 inet 192.168.100.8/24 scope global mlx_0_0 00:10:30.100 valid_lft forever preferred_lft forever 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:30.100 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:30.100 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:30.100 altname enp218s0f1np1 00:10:30.100 altname ens818f1np1 00:10:30.100 inet 192.168.100.9/24 scope global mlx_0_1 00:10:30.100 valid_lft forever preferred_lft forever 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:10:30.100 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:30.101 192.168.100.9' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:30.101 192.168.100.9' 
00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:30.101 192.168.100.9' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2964678 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2964678 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 2964678 ']' 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:30.101 20:19:42 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.101 [2024-05-16 20:19:42.815883] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:10:30.101 [2024-05-16 20:19:42.815929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.101 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.101 [2024-05-16 20:19:42.876936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.101 [2024-05-16 20:19:42.950626] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:30.101 [2024-05-16 20:19:42.950678] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.101 [2024-05-16 20:19:42.950686] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.101 [2024-05-16 20:19:42.950691] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.101 [2024-05-16 20:19:42.950696] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.101 [2024-05-16 20:19:42.950731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.667 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.945 [2024-05-16 20:19:43.670059] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb49ae0/0xb4dfd0) succeed. 00:10:30.945 [2024-05-16 20:19:43.678536] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb4afe0/0xb8f660) succeed. 
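The trace above brings up the target side of this test: nvmfappstart launches build/bin/nvmf_tgt pinned to core 1 (-m 0x2), waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers, and the first rpc_cmd creates the RDMA transport; the two create_ib_device notices confirm that both mlx5 ports were registered. A rough standalone equivalent, run from the root of an SPDK checkout, is sketched below; the relative paths and the polling loop are illustrative stand-ins for the harness's nvmfappstart/waitforlisten helpers, not the harness itself.

# Sketch: start the NVMe-oF target and add the RDMA transport the same way
# the rpc_cmd calls above do (flags copied from the trace).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Crude stand-in for waitforlisten: poll the default RPC socket until it answers.
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192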
00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.945 [2024-05-16 20:19:43.741305] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:30.945 [2024-05-16 20:19:43.741648] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.945 NULL1 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.945 20:19:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:30.945 [2024-05-16 20:19:43.793212] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:10:30.945 [2024-05-16 20:19:43.793246] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2964913 ] 00:10:30.945 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.203 Attached to nqn.2016-06.io.spdk:cnode1 00:10:31.203 Namespace ID: 1 size: 1GB 00:10:31.203 fused_ordering(0) 00:10:31.203 fused_ordering(1) 00:10:31.203 fused_ordering(2) 00:10:31.203 fused_ordering(3) 00:10:31.203 fused_ordering(4) 00:10:31.203 fused_ordering(5) 00:10:31.203 fused_ordering(6) 00:10:31.203 fused_ordering(7) 00:10:31.203 fused_ordering(8) 00:10:31.203 fused_ordering(9) 00:10:31.203 fused_ordering(10) 00:10:31.203 fused_ordering(11) 00:10:31.203 fused_ordering(12) 00:10:31.203 fused_ordering(13) 00:10:31.203 fused_ordering(14) 00:10:31.203 fused_ordering(15) 00:10:31.203 fused_ordering(16) 00:10:31.203 fused_ordering(17) 00:10:31.203 fused_ordering(18) 00:10:31.203 fused_ordering(19) 00:10:31.203 fused_ordering(20) 00:10:31.203 fused_ordering(21) 00:10:31.203 fused_ordering(22) 00:10:31.203 fused_ordering(23) 00:10:31.203 fused_ordering(24) 00:10:31.203 fused_ordering(25) 00:10:31.203 fused_ordering(26) 00:10:31.203 fused_ordering(27) 00:10:31.203 fused_ordering(28) 00:10:31.203 fused_ordering(29) 00:10:31.203 fused_ordering(30) 00:10:31.203 fused_ordering(31) 00:10:31.203 fused_ordering(32) 00:10:31.203 fused_ordering(33) 00:10:31.203 fused_ordering(34) 00:10:31.203 fused_ordering(35) 00:10:31.203 fused_ordering(36) 00:10:31.203 fused_ordering(37) 00:10:31.203 fused_ordering(38) 00:10:31.203 fused_ordering(39) 00:10:31.203 fused_ordering(40) 00:10:31.203 fused_ordering(41) 00:10:31.203 fused_ordering(42) 00:10:31.203 fused_ordering(43) 00:10:31.203 fused_ordering(44) 00:10:31.203 fused_ordering(45) 00:10:31.204 fused_ordering(46) 00:10:31.204 fused_ordering(47) 00:10:31.204 fused_ordering(48) 00:10:31.204 fused_ordering(49) 00:10:31.204 fused_ordering(50) 00:10:31.204 fused_ordering(51) 00:10:31.204 fused_ordering(52) 00:10:31.204 fused_ordering(53) 00:10:31.204 fused_ordering(54) 00:10:31.204 fused_ordering(55) 00:10:31.204 fused_ordering(56) 00:10:31.204 fused_ordering(57) 00:10:31.204 fused_ordering(58) 00:10:31.204 fused_ordering(59) 00:10:31.204 fused_ordering(60) 00:10:31.204 fused_ordering(61) 00:10:31.204 fused_ordering(62) 00:10:31.204 fused_ordering(63) 00:10:31.204 fused_ordering(64) 00:10:31.204 fused_ordering(65) 00:10:31.204 fused_ordering(66) 00:10:31.204 fused_ordering(67) 00:10:31.204 fused_ordering(68) 00:10:31.204 fused_ordering(69) 00:10:31.204 fused_ordering(70) 00:10:31.204 fused_ordering(71) 00:10:31.204 fused_ordering(72) 00:10:31.204 fused_ordering(73) 00:10:31.204 fused_ordering(74) 00:10:31.204 fused_ordering(75) 00:10:31.204 fused_ordering(76) 00:10:31.204 fused_ordering(77) 00:10:31.204 fused_ordering(78) 00:10:31.204 fused_ordering(79) 00:10:31.204 fused_ordering(80) 00:10:31.204 fused_ordering(81) 00:10:31.204 fused_ordering(82) 00:10:31.204 fused_ordering(83) 00:10:31.204 fused_ordering(84) 00:10:31.204 fused_ordering(85) 00:10:31.204 fused_ordering(86) 00:10:31.204 fused_ordering(87) 00:10:31.204 fused_ordering(88) 00:10:31.204 fused_ordering(89) 00:10:31.204 fused_ordering(90) 00:10:31.204 fused_ordering(91) 00:10:31.204 fused_ordering(92) 00:10:31.204 fused_ordering(93) 00:10:31.204 fused_ordering(94) 00:10:31.204 fused_ordering(95) 00:10:31.204 fused_ordering(96) 00:10:31.204 
fused_ordering(97) 00:10:31.204 fused_ordering(98) 00:10:31.204 fused_ordering(99) 00:10:31.204 fused_ordering(100) 00:10:31.204 fused_ordering(101) 00:10:31.204 fused_ordering(102) 00:10:31.204 fused_ordering(103) 00:10:31.204 fused_ordering(104) 00:10:31.204 fused_ordering(105) 00:10:31.204 fused_ordering(106) 00:10:31.204 fused_ordering(107) 00:10:31.204 fused_ordering(108) 00:10:31.204 fused_ordering(109) 00:10:31.204 fused_ordering(110) 00:10:31.204 fused_ordering(111) 00:10:31.204 fused_ordering(112) 00:10:31.204 fused_ordering(113) 00:10:31.204 fused_ordering(114) 00:10:31.204 fused_ordering(115) 00:10:31.204 fused_ordering(116) 00:10:31.204 fused_ordering(117) 00:10:31.204 fused_ordering(118) 00:10:31.204 fused_ordering(119) 00:10:31.204 fused_ordering(120) 00:10:31.204 fused_ordering(121) 00:10:31.204 fused_ordering(122) 00:10:31.204 fused_ordering(123) 00:10:31.204 fused_ordering(124) 00:10:31.204 fused_ordering(125) 00:10:31.204 fused_ordering(126) 00:10:31.204 fused_ordering(127) 00:10:31.204 fused_ordering(128) 00:10:31.204 fused_ordering(129) 00:10:31.204 fused_ordering(130) 00:10:31.204 fused_ordering(131) 00:10:31.204 fused_ordering(132) 00:10:31.204 fused_ordering(133) 00:10:31.204 fused_ordering(134) 00:10:31.204 fused_ordering(135) 00:10:31.204 fused_ordering(136) 00:10:31.204 fused_ordering(137) 00:10:31.204 fused_ordering(138) 00:10:31.204 fused_ordering(139) 00:10:31.204 fused_ordering(140) 00:10:31.204 fused_ordering(141) 00:10:31.204 fused_ordering(142) 00:10:31.204 fused_ordering(143) 00:10:31.204 fused_ordering(144) 00:10:31.204 fused_ordering(145) 00:10:31.204 fused_ordering(146) 00:10:31.204 fused_ordering(147) 00:10:31.204 fused_ordering(148) 00:10:31.204 fused_ordering(149) 00:10:31.204 fused_ordering(150) 00:10:31.204 fused_ordering(151) 00:10:31.204 fused_ordering(152) 00:10:31.204 fused_ordering(153) 00:10:31.204 fused_ordering(154) 00:10:31.204 fused_ordering(155) 00:10:31.204 fused_ordering(156) 00:10:31.204 fused_ordering(157) 00:10:31.204 fused_ordering(158) 00:10:31.204 fused_ordering(159) 00:10:31.204 fused_ordering(160) 00:10:31.204 fused_ordering(161) 00:10:31.204 fused_ordering(162) 00:10:31.204 fused_ordering(163) 00:10:31.204 fused_ordering(164) 00:10:31.204 fused_ordering(165) 00:10:31.204 fused_ordering(166) 00:10:31.204 fused_ordering(167) 00:10:31.204 fused_ordering(168) 00:10:31.204 fused_ordering(169) 00:10:31.204 fused_ordering(170) 00:10:31.204 fused_ordering(171) 00:10:31.204 fused_ordering(172) 00:10:31.204 fused_ordering(173) 00:10:31.204 fused_ordering(174) 00:10:31.204 fused_ordering(175) 00:10:31.204 fused_ordering(176) 00:10:31.204 fused_ordering(177) 00:10:31.204 fused_ordering(178) 00:10:31.204 fused_ordering(179) 00:10:31.204 fused_ordering(180) 00:10:31.204 fused_ordering(181) 00:10:31.204 fused_ordering(182) 00:10:31.204 fused_ordering(183) 00:10:31.204 fused_ordering(184) 00:10:31.204 fused_ordering(185) 00:10:31.204 fused_ordering(186) 00:10:31.204 fused_ordering(187) 00:10:31.204 fused_ordering(188) 00:10:31.204 fused_ordering(189) 00:10:31.204 fused_ordering(190) 00:10:31.204 fused_ordering(191) 00:10:31.204 fused_ordering(192) 00:10:31.204 fused_ordering(193) 00:10:31.204 fused_ordering(194) 00:10:31.204 fused_ordering(195) 00:10:31.204 fused_ordering(196) 00:10:31.204 fused_ordering(197) 00:10:31.204 fused_ordering(198) 00:10:31.204 fused_ordering(199) 00:10:31.204 fused_ordering(200) 00:10:31.204 fused_ordering(201) 00:10:31.204 fused_ordering(202) 00:10:31.204 fused_ordering(203) 00:10:31.204 fused_ordering(204) 
00:10:31.204 fused_ordering(205) 00:10:31.204 fused_ordering(206) 00:10:31.204 fused_ordering(207) 00:10:31.204 fused_ordering(208) 00:10:31.204 fused_ordering(209) 00:10:31.204 fused_ordering(210) 00:10:31.204 fused_ordering(211) 00:10:31.204 fused_ordering(212) 00:10:31.204 fused_ordering(213) 00:10:31.204 fused_ordering(214) 00:10:31.204 fused_ordering(215) 00:10:31.204 fused_ordering(216) 00:10:31.204 fused_ordering(217) 00:10:31.204 fused_ordering(218) 00:10:31.204 fused_ordering(219) 00:10:31.204 fused_ordering(220) 00:10:31.204 fused_ordering(221) 00:10:31.204 fused_ordering(222) 00:10:31.204 fused_ordering(223) 00:10:31.204 fused_ordering(224) 00:10:31.204 fused_ordering(225) 00:10:31.204 fused_ordering(226) 00:10:31.204 fused_ordering(227) 00:10:31.204 fused_ordering(228) 00:10:31.204 fused_ordering(229) 00:10:31.204 fused_ordering(230) 00:10:31.204 fused_ordering(231) 00:10:31.204 fused_ordering(232) 00:10:31.204 fused_ordering(233) 00:10:31.204 fused_ordering(234) 00:10:31.204 fused_ordering(235) 00:10:31.204 fused_ordering(236) 00:10:31.204 fused_ordering(237) 00:10:31.204 fused_ordering(238) 00:10:31.204 fused_ordering(239) 00:10:31.204 fused_ordering(240) 00:10:31.204 fused_ordering(241) 00:10:31.204 fused_ordering(242) 00:10:31.204 fused_ordering(243) 00:10:31.204 fused_ordering(244) 00:10:31.204 fused_ordering(245) 00:10:31.204 fused_ordering(246) 00:10:31.204 fused_ordering(247) 00:10:31.204 fused_ordering(248) 00:10:31.204 fused_ordering(249) 00:10:31.204 fused_ordering(250) 00:10:31.204 fused_ordering(251) 00:10:31.204 fused_ordering(252) 00:10:31.204 fused_ordering(253) 00:10:31.204 fused_ordering(254) 00:10:31.204 fused_ordering(255) 00:10:31.204 fused_ordering(256) 00:10:31.204 fused_ordering(257) 00:10:31.204 fused_ordering(258) 00:10:31.204 fused_ordering(259) 00:10:31.204 fused_ordering(260) 00:10:31.204 fused_ordering(261) 00:10:31.204 fused_ordering(262) 00:10:31.204 fused_ordering(263) 00:10:31.204 fused_ordering(264) 00:10:31.204 fused_ordering(265) 00:10:31.204 fused_ordering(266) 00:10:31.204 fused_ordering(267) 00:10:31.204 fused_ordering(268) 00:10:31.204 fused_ordering(269) 00:10:31.204 fused_ordering(270) 00:10:31.204 fused_ordering(271) 00:10:31.204 fused_ordering(272) 00:10:31.204 fused_ordering(273) 00:10:31.204 fused_ordering(274) 00:10:31.204 fused_ordering(275) 00:10:31.204 fused_ordering(276) 00:10:31.204 fused_ordering(277) 00:10:31.204 fused_ordering(278) 00:10:31.204 fused_ordering(279) 00:10:31.204 fused_ordering(280) 00:10:31.204 fused_ordering(281) 00:10:31.204 fused_ordering(282) 00:10:31.204 fused_ordering(283) 00:10:31.204 fused_ordering(284) 00:10:31.205 fused_ordering(285) 00:10:31.205 fused_ordering(286) 00:10:31.205 fused_ordering(287) 00:10:31.205 fused_ordering(288) 00:10:31.205 fused_ordering(289) 00:10:31.205 fused_ordering(290) 00:10:31.205 fused_ordering(291) 00:10:31.205 fused_ordering(292) 00:10:31.205 fused_ordering(293) 00:10:31.205 fused_ordering(294) 00:10:31.205 fused_ordering(295) 00:10:31.205 fused_ordering(296) 00:10:31.205 fused_ordering(297) 00:10:31.205 fused_ordering(298) 00:10:31.205 fused_ordering(299) 00:10:31.205 fused_ordering(300) 00:10:31.205 fused_ordering(301) 00:10:31.205 fused_ordering(302) 00:10:31.205 fused_ordering(303) 00:10:31.205 fused_ordering(304) 00:10:31.205 fused_ordering(305) 00:10:31.205 fused_ordering(306) 00:10:31.205 fused_ordering(307) 00:10:31.205 fused_ordering(308) 00:10:31.205 fused_ordering(309) 00:10:31.205 fused_ordering(310) 00:10:31.205 fused_ordering(311) 00:10:31.205 
fused_ordering(312) 00:10:31.205 fused_ordering(313) 00:10:31.205 fused_ordering(314) 00:10:31.205 fused_ordering(315) 00:10:31.205 fused_ordering(316) 00:10:31.205 fused_ordering(317) 00:10:31.205 fused_ordering(318) 00:10:31.205 fused_ordering(319) 00:10:31.205 fused_ordering(320) 00:10:31.205 fused_ordering(321) 00:10:31.205 fused_ordering(322) 00:10:31.205 fused_ordering(323) 00:10:31.205 fused_ordering(324) 00:10:31.205 fused_ordering(325) 00:10:31.205 fused_ordering(326) 00:10:31.205 fused_ordering(327) 00:10:31.205 fused_ordering(328) 00:10:31.205 fused_ordering(329) 00:10:31.205 fused_ordering(330) 00:10:31.205 fused_ordering(331) 00:10:31.205 fused_ordering(332) 00:10:31.205 fused_ordering(333) 00:10:31.205 fused_ordering(334) 00:10:31.205 fused_ordering(335) 00:10:31.205 fused_ordering(336) 00:10:31.205 fused_ordering(337) 00:10:31.205 fused_ordering(338) 00:10:31.205 fused_ordering(339) 00:10:31.205 fused_ordering(340) 00:10:31.205 fused_ordering(341) 00:10:31.205 fused_ordering(342) 00:10:31.205 fused_ordering(343) 00:10:31.205 fused_ordering(344) 00:10:31.205 fused_ordering(345) 00:10:31.205 fused_ordering(346) 00:10:31.205 fused_ordering(347) 00:10:31.205 fused_ordering(348) 00:10:31.205 fused_ordering(349) 00:10:31.205 fused_ordering(350) 00:10:31.205 fused_ordering(351) 00:10:31.205 fused_ordering(352) 00:10:31.205 fused_ordering(353) 00:10:31.205 fused_ordering(354) 00:10:31.205 fused_ordering(355) 00:10:31.205 fused_ordering(356) 00:10:31.205 fused_ordering(357) 00:10:31.205 fused_ordering(358) 00:10:31.205 fused_ordering(359) 00:10:31.205 fused_ordering(360) 00:10:31.205 fused_ordering(361) 00:10:31.205 fused_ordering(362) 00:10:31.205 fused_ordering(363) 00:10:31.205 fused_ordering(364) 00:10:31.205 fused_ordering(365) 00:10:31.205 fused_ordering(366) 00:10:31.205 fused_ordering(367) 00:10:31.205 fused_ordering(368) 00:10:31.205 fused_ordering(369) 00:10:31.205 fused_ordering(370) 00:10:31.205 fused_ordering(371) 00:10:31.205 fused_ordering(372) 00:10:31.205 fused_ordering(373) 00:10:31.205 fused_ordering(374) 00:10:31.205 fused_ordering(375) 00:10:31.205 fused_ordering(376) 00:10:31.205 fused_ordering(377) 00:10:31.205 fused_ordering(378) 00:10:31.205 fused_ordering(379) 00:10:31.205 fused_ordering(380) 00:10:31.205 fused_ordering(381) 00:10:31.205 fused_ordering(382) 00:10:31.205 fused_ordering(383) 00:10:31.205 fused_ordering(384) 00:10:31.205 fused_ordering(385) 00:10:31.205 fused_ordering(386) 00:10:31.205 fused_ordering(387) 00:10:31.205 fused_ordering(388) 00:10:31.205 fused_ordering(389) 00:10:31.205 fused_ordering(390) 00:10:31.205 fused_ordering(391) 00:10:31.205 fused_ordering(392) 00:10:31.205 fused_ordering(393) 00:10:31.205 fused_ordering(394) 00:10:31.205 fused_ordering(395) 00:10:31.205 fused_ordering(396) 00:10:31.205 fused_ordering(397) 00:10:31.205 fused_ordering(398) 00:10:31.205 fused_ordering(399) 00:10:31.205 fused_ordering(400) 00:10:31.205 fused_ordering(401) 00:10:31.205 fused_ordering(402) 00:10:31.205 fused_ordering(403) 00:10:31.205 fused_ordering(404) 00:10:31.205 fused_ordering(405) 00:10:31.205 fused_ordering(406) 00:10:31.205 fused_ordering(407) 00:10:31.205 fused_ordering(408) 00:10:31.205 fused_ordering(409) 00:10:31.205 fused_ordering(410) 00:10:31.205 fused_ordering(411) 00:10:31.205 fused_ordering(412) 00:10:31.205 fused_ordering(413) 00:10:31.205 fused_ordering(414) 00:10:31.205 fused_ordering(415) 00:10:31.205 fused_ordering(416) 00:10:31.205 fused_ordering(417) 00:10:31.205 fused_ordering(418) 00:10:31.205 fused_ordering(419) 
00:10:31.205 fused_ordering(420) 00:10:31.205 fused_ordering(421) 00:10:31.205 fused_ordering(422) 00:10:31.205 fused_ordering(423) 00:10:31.205 fused_ordering(424) 00:10:31.205 fused_ordering(425) 00:10:31.205 fused_ordering(426) 00:10:31.205 fused_ordering(427) 00:10:31.205 fused_ordering(428) 00:10:31.205 fused_ordering(429) 00:10:31.205 fused_ordering(430) 00:10:31.205 fused_ordering(431) 00:10:31.205 fused_ordering(432) 00:10:31.205 fused_ordering(433) 00:10:31.205 fused_ordering(434) 00:10:31.205 fused_ordering(435) 00:10:31.205 fused_ordering(436) 00:10:31.205 fused_ordering(437) 00:10:31.205 fused_ordering(438) 00:10:31.205 fused_ordering(439) 00:10:31.205 fused_ordering(440) 00:10:31.205 fused_ordering(441) 00:10:31.205 fused_ordering(442) 00:10:31.205 fused_ordering(443) 00:10:31.205 fused_ordering(444) 00:10:31.205 fused_ordering(445) 00:10:31.205 fused_ordering(446) 00:10:31.205 fused_ordering(447) 00:10:31.205 fused_ordering(448) 00:10:31.205 fused_ordering(449) 00:10:31.205 fused_ordering(450) 00:10:31.205 fused_ordering(451) 00:10:31.205 fused_ordering(452) 00:10:31.205 fused_ordering(453) 00:10:31.205 fused_ordering(454) 00:10:31.205 fused_ordering(455) 00:10:31.205 fused_ordering(456) 00:10:31.205 fused_ordering(457) 00:10:31.205 fused_ordering(458) 00:10:31.205 fused_ordering(459) 00:10:31.205 fused_ordering(460) 00:10:31.205 fused_ordering(461) 00:10:31.205 fused_ordering(462) 00:10:31.205 fused_ordering(463) 00:10:31.205 fused_ordering(464) 00:10:31.205 fused_ordering(465) 00:10:31.205 fused_ordering(466) 00:10:31.205 fused_ordering(467) 00:10:31.205 fused_ordering(468) 00:10:31.205 fused_ordering(469) 00:10:31.205 fused_ordering(470) 00:10:31.205 fused_ordering(471) 00:10:31.205 fused_ordering(472) 00:10:31.205 fused_ordering(473) 00:10:31.205 fused_ordering(474) 00:10:31.205 fused_ordering(475) 00:10:31.205 fused_ordering(476) 00:10:31.205 fused_ordering(477) 00:10:31.205 fused_ordering(478) 00:10:31.205 fused_ordering(479) 00:10:31.205 fused_ordering(480) 00:10:31.205 fused_ordering(481) 00:10:31.205 fused_ordering(482) 00:10:31.205 fused_ordering(483) 00:10:31.205 fused_ordering(484) 00:10:31.205 fused_ordering(485) 00:10:31.205 fused_ordering(486) 00:10:31.205 fused_ordering(487) 00:10:31.205 fused_ordering(488) 00:10:31.205 fused_ordering(489) 00:10:31.205 fused_ordering(490) 00:10:31.205 fused_ordering(491) 00:10:31.205 fused_ordering(492) 00:10:31.205 fused_ordering(493) 00:10:31.205 fused_ordering(494) 00:10:31.205 fused_ordering(495) 00:10:31.205 fused_ordering(496) 00:10:31.205 fused_ordering(497) 00:10:31.205 fused_ordering(498) 00:10:31.205 fused_ordering(499) 00:10:31.205 fused_ordering(500) 00:10:31.205 fused_ordering(501) 00:10:31.205 fused_ordering(502) 00:10:31.205 fused_ordering(503) 00:10:31.205 fused_ordering(504) 00:10:31.205 fused_ordering(505) 00:10:31.205 fused_ordering(506) 00:10:31.205 fused_ordering(507) 00:10:31.205 fused_ordering(508) 00:10:31.205 fused_ordering(509) 00:10:31.205 fused_ordering(510) 00:10:31.205 fused_ordering(511) 00:10:31.205 fused_ordering(512) 00:10:31.205 fused_ordering(513) 00:10:31.205 fused_ordering(514) 00:10:31.205 fused_ordering(515) 00:10:31.205 fused_ordering(516) 00:10:31.205 fused_ordering(517) 00:10:31.205 fused_ordering(518) 00:10:31.205 fused_ordering(519) 00:10:31.205 fused_ordering(520) 00:10:31.205 fused_ordering(521) 00:10:31.205 fused_ordering(522) 00:10:31.205 fused_ordering(523) 00:10:31.205 fused_ordering(524) 00:10:31.205 fused_ordering(525) 00:10:31.205 fused_ordering(526) 00:10:31.205 
fused_ordering(527) 00:10:31.205 fused_ordering(528) 00:10:31.205 fused_ordering(529) 00:10:31.205 fused_ordering(530) 00:10:31.205 fused_ordering(531) 00:10:31.205 fused_ordering(532) 00:10:31.205 fused_ordering(533) 00:10:31.205 fused_ordering(534) 00:10:31.205 fused_ordering(535) 00:10:31.205 fused_ordering(536) 00:10:31.205 fused_ordering(537) 00:10:31.205 fused_ordering(538) 00:10:31.205 fused_ordering(539) 00:10:31.205 fused_ordering(540) 00:10:31.205 fused_ordering(541) 00:10:31.205 fused_ordering(542) 00:10:31.205 fused_ordering(543) 00:10:31.205 fused_ordering(544) 00:10:31.205 fused_ordering(545) 00:10:31.205 fused_ordering(546) 00:10:31.205 fused_ordering(547) 00:10:31.205 fused_ordering(548) 00:10:31.205 fused_ordering(549) 00:10:31.205 fused_ordering(550) 00:10:31.205 fused_ordering(551) 00:10:31.205 fused_ordering(552) 00:10:31.205 fused_ordering(553) 00:10:31.205 fused_ordering(554) 00:10:31.205 fused_ordering(555) 00:10:31.205 fused_ordering(556) 00:10:31.205 fused_ordering(557) 00:10:31.205 fused_ordering(558) 00:10:31.205 fused_ordering(559) 00:10:31.205 fused_ordering(560) 00:10:31.205 fused_ordering(561) 00:10:31.205 fused_ordering(562) 00:10:31.205 fused_ordering(563) 00:10:31.205 fused_ordering(564) 00:10:31.205 fused_ordering(565) 00:10:31.205 fused_ordering(566) 00:10:31.205 fused_ordering(567) 00:10:31.205 fused_ordering(568) 00:10:31.205 fused_ordering(569) 00:10:31.205 fused_ordering(570) 00:10:31.205 fused_ordering(571) 00:10:31.205 fused_ordering(572) 00:10:31.205 fused_ordering(573) 00:10:31.205 fused_ordering(574) 00:10:31.206 fused_ordering(575) 00:10:31.206 fused_ordering(576) 00:10:31.206 fused_ordering(577) 00:10:31.206 fused_ordering(578) 00:10:31.206 fused_ordering(579) 00:10:31.206 fused_ordering(580) 00:10:31.206 fused_ordering(581) 00:10:31.206 fused_ordering(582) 00:10:31.206 fused_ordering(583) 00:10:31.206 fused_ordering(584) 00:10:31.206 fused_ordering(585) 00:10:31.206 fused_ordering(586) 00:10:31.206 fused_ordering(587) 00:10:31.206 fused_ordering(588) 00:10:31.206 fused_ordering(589) 00:10:31.206 fused_ordering(590) 00:10:31.206 fused_ordering(591) 00:10:31.206 fused_ordering(592) 00:10:31.206 fused_ordering(593) 00:10:31.206 fused_ordering(594) 00:10:31.206 fused_ordering(595) 00:10:31.206 fused_ordering(596) 00:10:31.206 fused_ordering(597) 00:10:31.206 fused_ordering(598) 00:10:31.206 fused_ordering(599) 00:10:31.206 fused_ordering(600) 00:10:31.206 fused_ordering(601) 00:10:31.206 fused_ordering(602) 00:10:31.206 fused_ordering(603) 00:10:31.206 fused_ordering(604) 00:10:31.206 fused_ordering(605) 00:10:31.206 fused_ordering(606) 00:10:31.206 fused_ordering(607) 00:10:31.206 fused_ordering(608) 00:10:31.206 fused_ordering(609) 00:10:31.206 fused_ordering(610) 00:10:31.206 fused_ordering(611) 00:10:31.206 fused_ordering(612) 00:10:31.206 fused_ordering(613) 00:10:31.206 fused_ordering(614) 00:10:31.206 fused_ordering(615) 00:10:31.464 fused_ordering(616) 00:10:31.464 fused_ordering(617) 00:10:31.464 fused_ordering(618) 00:10:31.464 fused_ordering(619) 00:10:31.464 fused_ordering(620) 00:10:31.464 fused_ordering(621) 00:10:31.464 fused_ordering(622) 00:10:31.464 fused_ordering(623) 00:10:31.464 fused_ordering(624) 00:10:31.464 fused_ordering(625) 00:10:31.464 fused_ordering(626) 00:10:31.464 fused_ordering(627) 00:10:31.464 fused_ordering(628) 00:10:31.464 fused_ordering(629) 00:10:31.464 fused_ordering(630) 00:10:31.464 fused_ordering(631) 00:10:31.464 fused_ordering(632) 00:10:31.464 fused_ordering(633) 00:10:31.464 fused_ordering(634) 
00:10:31.464 fused_ordering(635) 00:10:31.464 fused_ordering(636) 00:10:31.464 fused_ordering(637) 00:10:31.464 fused_ordering(638) 00:10:31.464 fused_ordering(639) 00:10:31.464 fused_ordering(640) 00:10:31.464 fused_ordering(641) 00:10:31.464 fused_ordering(642) 00:10:31.464 fused_ordering(643) 00:10:31.464 fused_ordering(644) 00:10:31.464 fused_ordering(645) 00:10:31.464 fused_ordering(646) 00:10:31.464 fused_ordering(647) 00:10:31.464 fused_ordering(648) 00:10:31.464 fused_ordering(649) 00:10:31.464 fused_ordering(650) 00:10:31.464 fused_ordering(651) 00:10:31.464 fused_ordering(652) 00:10:31.464 fused_ordering(653) 00:10:31.464 fused_ordering(654) 00:10:31.464 fused_ordering(655) 00:10:31.464 fused_ordering(656) 00:10:31.464 fused_ordering(657) 00:10:31.464 fused_ordering(658) 00:10:31.464 fused_ordering(659) 00:10:31.464 fused_ordering(660) 00:10:31.464 fused_ordering(661) 00:10:31.464 fused_ordering(662) 00:10:31.464 fused_ordering(663) 00:10:31.464 fused_ordering(664) 00:10:31.464 fused_ordering(665) 00:10:31.464 fused_ordering(666) 00:10:31.464 fused_ordering(667) 00:10:31.464 fused_ordering(668) 00:10:31.464 fused_ordering(669) 00:10:31.464 fused_ordering(670) 00:10:31.464 fused_ordering(671) 00:10:31.464 fused_ordering(672) 00:10:31.464 fused_ordering(673) 00:10:31.464 fused_ordering(674) 00:10:31.464 fused_ordering(675) 00:10:31.464 fused_ordering(676) 00:10:31.464 fused_ordering(677) 00:10:31.464 fused_ordering(678) 00:10:31.464 fused_ordering(679) 00:10:31.464 fused_ordering(680) 00:10:31.464 fused_ordering(681) 00:10:31.464 fused_ordering(682) 00:10:31.464 fused_ordering(683) 00:10:31.464 fused_ordering(684) 00:10:31.464 fused_ordering(685) 00:10:31.464 fused_ordering(686) 00:10:31.464 fused_ordering(687) 00:10:31.464 fused_ordering(688) 00:10:31.464 fused_ordering(689) 00:10:31.464 fused_ordering(690) 00:10:31.464 fused_ordering(691) 00:10:31.464 fused_ordering(692) 00:10:31.464 fused_ordering(693) 00:10:31.464 fused_ordering(694) 00:10:31.464 fused_ordering(695) 00:10:31.464 fused_ordering(696) 00:10:31.464 fused_ordering(697) 00:10:31.464 fused_ordering(698) 00:10:31.464 fused_ordering(699) 00:10:31.465 fused_ordering(700) 00:10:31.465 fused_ordering(701) 00:10:31.465 fused_ordering(702) 00:10:31.465 fused_ordering(703) 00:10:31.465 fused_ordering(704) 00:10:31.465 fused_ordering(705) 00:10:31.465 fused_ordering(706) 00:10:31.465 fused_ordering(707) 00:10:31.465 fused_ordering(708) 00:10:31.465 fused_ordering(709) 00:10:31.465 fused_ordering(710) 00:10:31.465 fused_ordering(711) 00:10:31.465 fused_ordering(712) 00:10:31.465 fused_ordering(713) 00:10:31.465 fused_ordering(714) 00:10:31.465 fused_ordering(715) 00:10:31.465 fused_ordering(716) 00:10:31.465 fused_ordering(717) 00:10:31.465 fused_ordering(718) 00:10:31.465 fused_ordering(719) 00:10:31.465 fused_ordering(720) 00:10:31.465 fused_ordering(721) 00:10:31.465 fused_ordering(722) 00:10:31.465 fused_ordering(723) 00:10:31.465 fused_ordering(724) 00:10:31.465 fused_ordering(725) 00:10:31.465 fused_ordering(726) 00:10:31.465 fused_ordering(727) 00:10:31.465 fused_ordering(728) 00:10:31.465 fused_ordering(729) 00:10:31.465 fused_ordering(730) 00:10:31.465 fused_ordering(731) 00:10:31.465 fused_ordering(732) 00:10:31.465 fused_ordering(733) 00:10:31.465 fused_ordering(734) 00:10:31.465 fused_ordering(735) 00:10:31.465 fused_ordering(736) 00:10:31.465 fused_ordering(737) 00:10:31.465 fused_ordering(738) 00:10:31.465 fused_ordering(739) 00:10:31.465 fused_ordering(740) 00:10:31.465 fused_ordering(741) 00:10:31.465 
fused_ordering(742) 00:10:31.465 fused_ordering(743) 00:10:31.465 fused_ordering(744) 00:10:31.465 fused_ordering(745) 00:10:31.465 fused_ordering(746) 00:10:31.465 fused_ordering(747) 00:10:31.465 fused_ordering(748) 00:10:31.465 fused_ordering(749) 00:10:31.465 fused_ordering(750) 00:10:31.465 fused_ordering(751) 00:10:31.465 fused_ordering(752) 00:10:31.465 fused_ordering(753) 00:10:31.465 fused_ordering(754) 00:10:31.465 fused_ordering(755) 00:10:31.465 fused_ordering(756) 00:10:31.465 fused_ordering(757) 00:10:31.465 fused_ordering(758) 00:10:31.465 fused_ordering(759) 00:10:31.465 fused_ordering(760) 00:10:31.465 fused_ordering(761) 00:10:31.465 fused_ordering(762) 00:10:31.465 fused_ordering(763) 00:10:31.465 fused_ordering(764) 00:10:31.465 fused_ordering(765) 00:10:31.465 fused_ordering(766) 00:10:31.465 fused_ordering(767) 00:10:31.465 fused_ordering(768) 00:10:31.465 fused_ordering(769) 00:10:31.465 fused_ordering(770) 00:10:31.465 fused_ordering(771) 00:10:31.465 fused_ordering(772) 00:10:31.465 fused_ordering(773) 00:10:31.465 fused_ordering(774) 00:10:31.465 fused_ordering(775) 00:10:31.465 fused_ordering(776) 00:10:31.465 fused_ordering(777) 00:10:31.465 fused_ordering(778) 00:10:31.465 fused_ordering(779) 00:10:31.465 fused_ordering(780) 00:10:31.465 fused_ordering(781) 00:10:31.465 fused_ordering(782) 00:10:31.465 fused_ordering(783) 00:10:31.465 fused_ordering(784) 00:10:31.465 fused_ordering(785) 00:10:31.465 fused_ordering(786) 00:10:31.465 fused_ordering(787) 00:10:31.465 fused_ordering(788) 00:10:31.465 fused_ordering(789) 00:10:31.465 fused_ordering(790) 00:10:31.465 fused_ordering(791) 00:10:31.465 fused_ordering(792) 00:10:31.465 fused_ordering(793) 00:10:31.465 fused_ordering(794) 00:10:31.465 fused_ordering(795) 00:10:31.465 fused_ordering(796) 00:10:31.465 fused_ordering(797) 00:10:31.465 fused_ordering(798) 00:10:31.465 fused_ordering(799) 00:10:31.465 fused_ordering(800) 00:10:31.465 fused_ordering(801) 00:10:31.465 fused_ordering(802) 00:10:31.465 fused_ordering(803) 00:10:31.465 fused_ordering(804) 00:10:31.465 fused_ordering(805) 00:10:31.465 fused_ordering(806) 00:10:31.465 fused_ordering(807) 00:10:31.465 fused_ordering(808) 00:10:31.465 fused_ordering(809) 00:10:31.465 fused_ordering(810) 00:10:31.465 fused_ordering(811) 00:10:31.465 fused_ordering(812) 00:10:31.465 fused_ordering(813) 00:10:31.465 fused_ordering(814) 00:10:31.465 fused_ordering(815) 00:10:31.465 fused_ordering(816) 00:10:31.465 fused_ordering(817) 00:10:31.465 fused_ordering(818) 00:10:31.465 fused_ordering(819) 00:10:31.465 fused_ordering(820) 00:10:31.724 fused_ordering(821) 00:10:31.724 fused_ordering(822) 00:10:31.724 fused_ordering(823) 00:10:31.724 fused_ordering(824) 00:10:31.724 fused_ordering(825) 00:10:31.724 fused_ordering(826) 00:10:31.724 fused_ordering(827) 00:10:31.724 fused_ordering(828) 00:10:31.724 fused_ordering(829) 00:10:31.724 fused_ordering(830) 00:10:31.724 fused_ordering(831) 00:10:31.724 fused_ordering(832) 00:10:31.724 fused_ordering(833) 00:10:31.724 fused_ordering(834) 00:10:31.724 fused_ordering(835) 00:10:31.724 fused_ordering(836) 00:10:31.724 fused_ordering(837) 00:10:31.724 fused_ordering(838) 00:10:31.724 fused_ordering(839) 00:10:31.724 fused_ordering(840) 00:10:31.724 fused_ordering(841) 00:10:31.724 fused_ordering(842) 00:10:31.724 fused_ordering(843) 00:10:31.724 fused_ordering(844) 00:10:31.724 fused_ordering(845) 00:10:31.724 fused_ordering(846) 00:10:31.724 fused_ordering(847) 00:10:31.724 fused_ordering(848) 00:10:31.724 fused_ordering(849) 
00:10:31.724 fused_ordering(850) 00:10:31.724 fused_ordering(851) 00:10:31.724 fused_ordering(852) 00:10:31.724 fused_ordering(853) 00:10:31.724 fused_ordering(854) 00:10:31.724 fused_ordering(855) 00:10:31.724 fused_ordering(856) 00:10:31.724 fused_ordering(857) 00:10:31.724 fused_ordering(858) 00:10:31.724 fused_ordering(859) 00:10:31.724 fused_ordering(860) 00:10:31.724 fused_ordering(861) 00:10:31.724 fused_ordering(862) 00:10:31.724 fused_ordering(863) 00:10:31.724 fused_ordering(864) 00:10:31.724 fused_ordering(865) 00:10:31.724 fused_ordering(866) 00:10:31.724 fused_ordering(867) 00:10:31.724 fused_ordering(868) 00:10:31.724 fused_ordering(869) 00:10:31.724 fused_ordering(870) 00:10:31.724 fused_ordering(871) 00:10:31.724 fused_ordering(872) 00:10:31.724 fused_ordering(873) 00:10:31.724 fused_ordering(874) 00:10:31.724 fused_ordering(875) 00:10:31.724 fused_ordering(876) 00:10:31.724 fused_ordering(877) 00:10:31.724 fused_ordering(878) 00:10:31.724 fused_ordering(879) 00:10:31.724 fused_ordering(880) 00:10:31.724 fused_ordering(881) 00:10:31.724 fused_ordering(882) 00:10:31.724 fused_ordering(883) 00:10:31.724 fused_ordering(884) 00:10:31.724 fused_ordering(885) 00:10:31.724 fused_ordering(886) 00:10:31.724 fused_ordering(887) 00:10:31.724 fused_ordering(888) 00:10:31.724 fused_ordering(889) 00:10:31.724 fused_ordering(890) 00:10:31.724 fused_ordering(891) 00:10:31.724 fused_ordering(892) 00:10:31.724 fused_ordering(893) 00:10:31.724 fused_ordering(894) 00:10:31.724 fused_ordering(895) 00:10:31.724 fused_ordering(896) 00:10:31.724 fused_ordering(897) 00:10:31.724 fused_ordering(898) 00:10:31.724 fused_ordering(899) 00:10:31.724 fused_ordering(900) 00:10:31.724 fused_ordering(901) 00:10:31.724 fused_ordering(902) 00:10:31.724 fused_ordering(903) 00:10:31.724 fused_ordering(904) 00:10:31.724 fused_ordering(905) 00:10:31.724 fused_ordering(906) 00:10:31.724 fused_ordering(907) 00:10:31.724 fused_ordering(908) 00:10:31.724 fused_ordering(909) 00:10:31.724 fused_ordering(910) 00:10:31.724 fused_ordering(911) 00:10:31.724 fused_ordering(912) 00:10:31.724 fused_ordering(913) 00:10:31.724 fused_ordering(914) 00:10:31.724 fused_ordering(915) 00:10:31.724 fused_ordering(916) 00:10:31.724 fused_ordering(917) 00:10:31.724 fused_ordering(918) 00:10:31.724 fused_ordering(919) 00:10:31.724 fused_ordering(920) 00:10:31.724 fused_ordering(921) 00:10:31.724 fused_ordering(922) 00:10:31.724 fused_ordering(923) 00:10:31.724 fused_ordering(924) 00:10:31.724 fused_ordering(925) 00:10:31.724 fused_ordering(926) 00:10:31.724 fused_ordering(927) 00:10:31.724 fused_ordering(928) 00:10:31.724 fused_ordering(929) 00:10:31.724 fused_ordering(930) 00:10:31.724 fused_ordering(931) 00:10:31.724 fused_ordering(932) 00:10:31.724 fused_ordering(933) 00:10:31.724 fused_ordering(934) 00:10:31.724 fused_ordering(935) 00:10:31.724 fused_ordering(936) 00:10:31.724 fused_ordering(937) 00:10:31.724 fused_ordering(938) 00:10:31.724 fused_ordering(939) 00:10:31.724 fused_ordering(940) 00:10:31.724 fused_ordering(941) 00:10:31.724 fused_ordering(942) 00:10:31.724 fused_ordering(943) 00:10:31.724 fused_ordering(944) 00:10:31.724 fused_ordering(945) 00:10:31.724 fused_ordering(946) 00:10:31.724 fused_ordering(947) 00:10:31.724 fused_ordering(948) 00:10:31.724 fused_ordering(949) 00:10:31.724 fused_ordering(950) 00:10:31.724 fused_ordering(951) 00:10:31.724 fused_ordering(952) 00:10:31.724 fused_ordering(953) 00:10:31.724 fused_ordering(954) 00:10:31.724 fused_ordering(955) 00:10:31.724 fused_ordering(956) 00:10:31.724 
fused_ordering(957) 00:10:31.724 fused_ordering(958) 00:10:31.724 fused_ordering(959) 00:10:31.724 fused_ordering(960) 00:10:31.724 fused_ordering(961) 00:10:31.724 fused_ordering(962) 00:10:31.724 fused_ordering(963) 00:10:31.724 fused_ordering(964) 00:10:31.724 fused_ordering(965) 00:10:31.724 fused_ordering(966) 00:10:31.724 fused_ordering(967) 00:10:31.724 fused_ordering(968) 00:10:31.724 fused_ordering(969) 00:10:31.724 fused_ordering(970) 00:10:31.724 fused_ordering(971) 00:10:31.724 fused_ordering(972) 00:10:31.724 fused_ordering(973) 00:10:31.724 fused_ordering(974) 00:10:31.724 fused_ordering(975) 00:10:31.724 fused_ordering(976) 00:10:31.724 fused_ordering(977) 00:10:31.724 fused_ordering(978) 00:10:31.724 fused_ordering(979) 00:10:31.724 fused_ordering(980) 00:10:31.724 fused_ordering(981) 00:10:31.724 fused_ordering(982) 00:10:31.724 fused_ordering(983) 00:10:31.724 fused_ordering(984) 00:10:31.724 fused_ordering(985) 00:10:31.724 fused_ordering(986) 00:10:31.724 fused_ordering(987) 00:10:31.724 fused_ordering(988) 00:10:31.724 fused_ordering(989) 00:10:31.724 fused_ordering(990) 00:10:31.724 fused_ordering(991) 00:10:31.724 fused_ordering(992) 00:10:31.724 fused_ordering(993) 00:10:31.724 fused_ordering(994) 00:10:31.724 fused_ordering(995) 00:10:31.724 fused_ordering(996) 00:10:31.724 fused_ordering(997) 00:10:31.724 fused_ordering(998) 00:10:31.724 fused_ordering(999) 00:10:31.724 fused_ordering(1000) 00:10:31.724 fused_ordering(1001) 00:10:31.724 fused_ordering(1002) 00:10:31.724 fused_ordering(1003) 00:10:31.724 fused_ordering(1004) 00:10:31.724 fused_ordering(1005) 00:10:31.724 fused_ordering(1006) 00:10:31.724 fused_ordering(1007) 00:10:31.724 fused_ordering(1008) 00:10:31.724 fused_ordering(1009) 00:10:31.724 fused_ordering(1010) 00:10:31.724 fused_ordering(1011) 00:10:31.724 fused_ordering(1012) 00:10:31.724 fused_ordering(1013) 00:10:31.724 fused_ordering(1014) 00:10:31.724 fused_ordering(1015) 00:10:31.724 fused_ordering(1016) 00:10:31.724 fused_ordering(1017) 00:10:31.724 fused_ordering(1018) 00:10:31.724 fused_ordering(1019) 00:10:31.724 fused_ordering(1020) 00:10:31.724 fused_ordering(1021) 00:10:31.724 fused_ordering(1022) 00:10:31.724 fused_ordering(1023) 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:31.724 rmmod nvme_rdma 00:10:31.724 rmmod nvme_fabrics 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2964678 ']' 00:10:31.724 
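The fused_ordering(0) through fused_ordering(1023) lines above are the per-iteration output of SPDK's test/nvme/fused_ordering tool after it attached to nqn.2016-06.io.spdk:cnode1 over RDMA at 192.168.100.8:4420; the namespace behind it is the NULL1 null bdev (1000 MB, 512-byte blocks) created just before the run. The rpc_cmd sequence the trace issued can be replayed by hand roughly as follows; this is a sketch against the already running target, with commands and arguments copied from the trace and relative paths assuming an SPDK checkout with the test binaries built.

# Sketch: recreate the subsystem, listener, and namespace, then run the tool.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB namespace, 512-byte blocks
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

nvmftestfini then tears this down, as the surrounding trace shows: the EXIT trap fires, nvme-rdma and nvme-fabrics are unloaded (the rmmod lines above), and the target pid recorded at startup is killed.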
20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2964678 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 2964678 ']' 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 2964678 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2964678 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2964678' 00:10:31.724 killing process with pid 2964678 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 2964678 00:10:31.724 [2024-05-16 20:19:44.566268] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:31.724 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 2964678 00:10:31.724 [2024-05-16 20:19:44.605678] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:31.982 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.982 20:19:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:31.982 00:10:31.982 real 0m8.563s 00:10:31.982 user 0m4.593s 00:10:31.982 sys 0m5.208s 00:10:31.982 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:31.982 20:19:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:31.982 ************************************ 00:10:31.982 END TEST nvmf_fused_ordering 00:10:31.982 ************************************ 00:10:31.982 20:19:44 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:31.982 20:19:44 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:31.982 20:19:44 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:31.982 20:19:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:31.982 ************************************ 00:10:31.982 START TEST nvmf_delete_subsystem 00:10:31.982 ************************************ 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:31.982 * Looking for test storage... 
00:10:31.982 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.982 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.983 20:19:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.550 20:19:50 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:38.550 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:38.550 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:38.550 Found net devices under 0000:da:00.0: mlx_0_0 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.550 20:19:50 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:38.550 Found net devices under 0000:da:00.1: mlx_0_1 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.550 20:19:50 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:38.550 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:38.551 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:38.551 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:38.551 altname enp218s0f0np0 00:10:38.551 altname ens818f0np0 00:10:38.551 inet 192.168.100.8/24 scope global mlx_0_0 00:10:38.551 valid_lft forever preferred_lft forever 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:38.551 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:38.551 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:38.551 altname enp218s0f1np1 00:10:38.551 altname ens818f1np1 00:10:38.551 inet 192.168.100.9/24 scope global mlx_0_1 00:10:38.551 valid_lft forever preferred_lft forever 00:10:38.551 20:19:50 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:38.551 20:19:50 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:38.551 192.168.100.9' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:38.551 192.168.100.9' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:38.551 192.168.100.9' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2968487 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2968487 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 2968487 ']' 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:38.551 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.551 [2024-05-16 20:19:51.133587] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
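Note on the address discovery traced above: the allocate_nic_ips/get_ip_address steps resolve each RDMA-capable interface (mlx_0_0, mlx_0_1) to its IPv4 address (192.168.100.8 and 192.168.100.9 on this host), which later feed NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A stripped-down sketch of that derivation outside the test harness follows; the interface names and the 192.168.100.x addressing are taken from this run, and get_ip here is only an illustrative stand-in for the helper in test/nvmf/common.sh.

    # resolve an RDMA netdev to its IPv4 address, the same pipeline the trace shows
    get_ip() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    first_ip=$(get_ip mlx_0_0)     # 192.168.100.8 on this host
    second_ip=$(get_ip mlx_0_1)    # 192.168.100.9 on this host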
00:10:38.551 [2024-05-16 20:19:51.133639] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.551 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.551 [2024-05-16 20:19:51.197091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:38.551 [2024-05-16 20:19:51.278872] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.551 [2024-05-16 20:19:51.278911] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.551 [2024-05-16 20:19:51.278918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.551 [2024-05-16 20:19:51.278924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.551 [2024-05-16 20:19:51.278929] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.551 [2024-05-16 20:19:51.278973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.551 [2024-05-16 20:19:51.278976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.119 20:19:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 [2024-05-16 20:19:51.989922] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xea92f0/0xead7e0) succeed. 00:10:39.119 [2024-05-16 20:19:51.998744] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeaa7f0/0xeeee70) succeed. 
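Note on the target bring-up traced above: nvmf_tgt is launched with core mask 0x3 (pid 2968487), the script waits on its /var/tmp/spdk.sock RPC socket, and an RDMA transport is created, which is the point where the two mlx5 IB devices are registered. A minimal manual equivalent for a plain SPDK checkout is sketched below; the until-loop is an assumption standing in for the harness's waitforlisten helper, and the relative ./build and ./scripts paths assume you run from the SPDK tree.

    # start the NVMe-oF target on cores 0-1, as this test does (sketch, not the autotest wrapper)
    ./build/bin/nvmf_tgt -m 0x3 &
    nvmfpid=$!
    # wait until the RPC socket answers; the test harness uses waitforlisten for this step
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
    # create the RDMA transport with the same options as this run
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192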
00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 [2024-05-16 20:19:52.078074] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:39.119 [2024-05-16 20:19:52.078438] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 NULL1 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 Delay0 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2968734 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:39.119 20:19:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:39.377 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.378 [2024-05-16 20:19:52.175353] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:41.280 20:19:54 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.280 20:19:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.280 20:19:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:42.655 NVMe io qpair process completion error 00:10:42.655 NVMe io qpair process completion error 00:10:42.655 NVMe io qpair process completion error 00:10:42.655 NVMe io qpair process completion error 00:10:42.655 NVMe io qpair process completion error 00:10:42.655 NVMe io qpair process completion error 00:10:42.655 20:19:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.655 20:19:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:42.655 20:19:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2968734 00:10:42.655 20:19:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:42.913 20:19:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:42.913 20:19:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2968734 00:10:42.913 20:19:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, 
sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error 
(sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Read completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.480 Write completed with error (sct=0, sc=8) 00:10:43.480 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O 
failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 starting I/O failed: -6 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error 
(sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write 
completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Write completed with error (sct=0, sc=8) 00:10:43.481 Read completed with error (sct=0, sc=8) 00:10:43.481 Initializing NVMe Controllers 00:10:43.481 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.481 Controller IO queue size 128, less than required. 00:10:43.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:43.481 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:43.481 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:43.481 Initialization complete. Launching workers. 00:10:43.481 ======================================================== 00:10:43.481 Latency(us) 00:10:43.481 Device Information : IOPS MiB/s Average min max 00:10:43.481 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.47 0.04 1593926.73 1000085.70 2976273.86 00:10:43.481 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.47 0.04 1595356.06 1000367.37 2977428.16 00:10:43.482 ======================================================== 00:10:43.482 Total : 160.95 0.08 1594641.39 1000085.70 2977428.16 00:10:43.482 00:10:43.482 [2024-05-16 20:19:56.259497] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:10:43.482 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:43.482 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2968734 00:10:43.482 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:43.482 [2024-05-16 20:19:56.273821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:43.482 [2024-05-16 20:19:56.273837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
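Note on the failure pattern above: the qpair completion errors and the "in failed state" message are the intended outcome of this test case. A subsystem backed by a heavily delayed bdev is deleted while spdk_nvme_perf still has 128 queued I/Os against it, and the script then polls the perf pid until it exits. A condensed sketch of that sequence with the same perf options is shown below; perf_pid and the surrounding shell are illustrative, while the real loop lives in delete_subsystem.sh around lines 26-38 as the trace indicates.

    # drive randrw I/O at the subsystem in the background, same options as this run
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    # delete the subsystem out from under the initiator while I/O is in flight
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # wait (at most ~15s) for perf to notice and exit, mirroring the kill -0 / sleep 0.5 loop
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && break
        sleep 0.5
    done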
00:10:43.482 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2968734 00:10:44.049 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2968734) - No such process 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2968734 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2968734 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2968734 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.049 [2024-05-16 20:19:56.784767] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2969520 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:44.049 20:19:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:44.049 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.049 [2024-05-16 20:19:56.860028] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:44.616 20:19:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:44.616 20:19:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:44.616 20:19:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:44.874 20:19:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:44.874 20:19:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:44.874 20:19:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:45.441 20:19:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:45.441 20:19:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:45.441 20:19:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:46.006 20:19:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:46.006 20:19:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:46.006 20:19:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:46.571 20:19:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:46.572 20:19:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:46.572 20:19:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:47.138 20:19:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:47.138 20:19:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:47.138 20:19:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:47.395 20:20:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:47.395 20:20:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:47.395 20:20:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:47.962 20:20:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:47.962 20:20:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:47.962 20:20:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:48.541 20:20:01 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:48.541 20:20:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:48.541 20:20:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:49.107 20:20:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:49.107 20:20:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:49.107 20:20:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:49.365 20:20:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:49.365 20:20:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:49.365 20:20:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:49.931 20:20:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:49.931 20:20:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:49.931 20:20:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:50.516 20:20:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:50.516 20:20:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:50.516 20:20:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:51.082 20:20:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:51.082 20:20:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:51.082 20:20:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:51.082 Initializing NVMe Controllers 00:10:51.082 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.082 Controller IO queue size 128, less than required. 00:10:51.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:51.082 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:51.082 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:51.082 Initialization complete. Launching workers. 
00:10:51.082 ======================================================== 00:10:51.082 Latency(us) 00:10:51.082 Device Information : IOPS MiB/s Average min max 00:10:51.082 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001245.86 1000065.22 1003950.11 00:10:51.082 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002466.94 1000098.38 1006050.90 00:10:51.082 ======================================================== 00:10:51.082 Total : 256.00 0.12 1001856.40 1000065.22 1006050.90 00:10:51.082 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2969520 00:10:51.692 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2969520) - No such process 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2969520 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:51.692 rmmod nvme_rdma 00:10:51.692 rmmod nvme_fabrics 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2968487 ']' 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2968487 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 2968487 ']' 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 2968487 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2968487 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2968487' 00:10:51.692 killing process with pid 2968487 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 
2968487 00:10:51.692 [2024-05-16 20:20:04.457309] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 2968487 00:10:51.692 [2024-05-16 20:20:04.506763] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.692 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:51.692 00:10:51.692 real 0m19.833s 00:10:51.692 user 0m49.930s 00:10:51.692 sys 0m5.664s 00:10:51.952 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:51.952 20:20:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.952 ************************************ 00:10:51.952 END TEST nvmf_delete_subsystem 00:10:51.952 ************************************ 00:10:51.952 20:20:04 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:10:51.952 20:20:04 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:51.952 20:20:04 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:51.952 20:20:04 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:51.952 ************************************ 00:10:51.952 START TEST nvmf_ns_masking 00:10:51.952 ************************************ 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:10:51.952 * Looking for test storage... 
00:10:51.952 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.952 20:20:04 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=3b577ddd-e56f-4058-84a1-b03900d4a60e 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 
00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:51.953 20:20:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:58.518 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:58.518 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:58.518 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:58.519 Found net devices under 
0000:da:00.0: mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:58.519 Found net devices under 0000:da:00.1: mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:58.519 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:58.519 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:58.519 altname enp218s0f0np0 00:10:58.519 altname ens818f0np0 00:10:58.519 inet 192.168.100.8/24 scope global mlx_0_0 00:10:58.519 valid_lft forever preferred_lft forever 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:58.519 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:58.519 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:58.519 altname enp218s0f1np1 00:10:58.519 altname ens818f1np1 00:10:58.519 inet 192.168.100.9/24 scope global mlx_0_1 00:10:58.519 valid_lft forever preferred_lft forever 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:58.519 192.168.100.9' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:58.519 192.168.100.9' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@457 -- # head -n 1 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:58.519 192.168.100.9' 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:10:58.519 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2974327 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2974327 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 2974327 ']' 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:58.520 20:20:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:58.520 [2024-05-16 20:20:10.912954] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:10:58.520 [2024-05-16 20:20:10.913000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.520 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.520 [2024-05-16 20:20:10.974394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.520 [2024-05-16 20:20:11.047717] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.520 [2024-05-16 20:20:11.047759] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:58.520 [2024-05-16 20:20:11.047766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.520 [2024-05-16 20:20:11.047772] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.520 [2024-05-16 20:20:11.047777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.520 [2024-05-16 20:20:11.047832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.520 [2024-05-16 20:20:11.047929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.520 [2024-05-16 20:20:11.048021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.520 [2024-05-16 20:20:11.048022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.778 20:20:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:58.778 20:20:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:10:58.778 20:20:11 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:58.778 20:20:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.778 20:20:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:58.778 20:20:11 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.778 20:20:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:59.036 [2024-05-16 20:20:11.929045] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dfa9b0/0x1dfeea0) succeed. 00:10:59.036 [2024-05-16 20:20:11.939220] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dfbff0/0x1e40530) succeed. 
00:10:59.295 20:20:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:59.295 20:20:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:59.295 20:20:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:59.295 Malloc1 00:10:59.295 20:20:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:59.553 Malloc2 00:10:59.553 20:20:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.812 20:20:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:00.111 20:20:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:00.111 [2024-05-16 20:20:12.982239] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:00.111 [2024-05-16 20:20:12.982606] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:00.111 20:20:13 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:00.111 20:20:13 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b577ddd-e56f-4058-84a1-b03900d4a60e -a 192.168.100.8 -s 4420 -i 4 00:11:00.393 20:20:13 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.393 20:20:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:00.393 20:20:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.393 20:20:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:00.393 20:20:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- 
target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:02.939 [ 0]:0x1 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f3629a11391f42eb8fd91871d9190a4a 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f3629a11391f42eb8fd91871d9190a4a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:02.939 [ 0]:0x1 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f3629a11391f42eb8fd91871d9190a4a 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f3629a11391f42eb8fd91871d9190a4a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:02.939 [ 1]:0x2 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e70817b5953c4e5ebd6b73354e6017bb 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e70817b5953c4e5ebd6b73354e6017bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:02.939 20:20:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.197 20:20:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.456 20:20:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:03.456 20:20:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:03.456 20:20:16 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b577ddd-e56f-4058-84a1-b03900d4a60e -a 192.168.100.8 -s 4420 -i 4 00:11:04.022 20:20:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:04.022 20:20:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:04.022 20:20:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.022 20:20:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:11:04.022 20:20:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:11:04.022 20:20:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:05.923 [ 0]:0x2 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e70817b5953c4e5ebd6b73354e6017bb 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e70817b5953c4e5ebd6b73354e6017bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:05.923 20:20:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:06.182 [ 0]:0x1 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f3629a11391f42eb8fd91871d9190a4a 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f3629a11391f42eb8fd91871d9190a4a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:06.182 [ 1]:0x2 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:06.182 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e70817b5953c4e5ebd6b73354e6017bb 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e70817b5953c4e5ebd6b73354e6017bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking 
-- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:06.440 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:06.698 [ 0]:0x2 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e70817b5953c4e5ebd6b73354e6017bb 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e70817b5953c4e5ebd6b73354e6017bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:06.698 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.960 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:07.218 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:07.218 20:20:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b577ddd-e56f-4058-84a1-b03900d4a60e -a 192.168.100.8 -s 4420 -i 4 00:11:07.478 20:20:20 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:07.478 20:20:20 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@1194 -- # local i=0 00:11:07.478 20:20:20 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.478 20:20:20 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:07.478 20:20:20 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:07.478 20:20:20 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:09.380 [ 0]:0x1 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:09.380 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f3629a11391f42eb8fd91871d9190a4a 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f3629a11391f42eb8fd91871d9190a4a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:09.637 [ 1]:0x2 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e70817b5953c4e5ebd6b73354e6017bb 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e70817b5953c4e5ebd6b73354e6017bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.637 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # 
NOT ns_is_visible 0x1 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:09.896 [ 0]:0x2 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:09.896 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e70817b5953c4e5ebd6b73354e6017bb 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e70817b5953c4e5ebd6b73354e6017bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:09.897 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:09.897 [2024-05-16 20:20:22.883282] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:09.897 request: 00:11:09.897 { 00:11:09.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.897 "nsid": 2, 00:11:09.897 "host": "nqn.2016-06.io.spdk:host1", 00:11:09.897 "method": "nvmf_ns_remove_host", 00:11:09.897 "req_id": 1 00:11:09.897 } 00:11:09.897 Got JSON-RPC error response 00:11:09.897 response: 00:11:09.897 { 00:11:09.897 "code": -32602, 00:11:09.897 "message": "Invalid parameters" 00:11:09.897 } 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:10.155 [ 0]:0x2 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:10.155 20:20:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:10.155 20:20:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e70817b5953c4e5ebd6b73354e6017bb 00:11:10.155 20:20:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e70817b5953c4e5ebd6b73354e6017bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.155 20:20:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:10.155 20:20:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.423 20:20:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:10.681 rmmod nvme_rdma 00:11:10.681 rmmod nvme_fabrics 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2974327 ']' 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2974327 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 2974327 ']' 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 2974327 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:10.681 20:20:23 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2974327 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2974327' 00:11:10.681 killing process with pid 2974327 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 2974327 00:11:10.681 [2024-05-16 20:20:23.631178] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:10.681 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 2974327 00:11:10.939 [2024-05-16 20:20:23.711806] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:10.939 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.939 20:20:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:10.939 00:11:10.939 real 0m19.173s 00:11:10.939 user 0m55.991s 00:11:10.939 sys 0m5.763s 00:11:10.939 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:10.939 20:20:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:10.939 ************************************ 00:11:10.939 END TEST nvmf_ns_masking 00:11:10.939 ************************************ 00:11:11.197 20:20:23 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:11.197 20:20:23 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:11.197 20:20:23 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:11.197 20:20:23 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.197 20:20:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:11.197 ************************************ 00:11:11.197 START TEST nvmf_nvme_cli 00:11:11.197 ************************************ 00:11:11.197 20:20:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:11.197 * Looking for test storage... 
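The ns_masking checks that finish above all reduce to the same test: ask the initiator-side controller which NGUID it reports for a namespace and compare it against the all-zero placeholder. A minimal stand-alone sketch of that check, built from the same nvme-cli and jq calls that appear in the trace (the controller node /dev/nvme0 and NSID 0x2 are taken from the log; the echo messages are illustrative, not part of the harness):

    nsid=0x2
    nvme list-ns /dev/nvme0 | grep "$nsid"                            # namespace must be listed to this host
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    if [[ $nguid == "00000000000000000000000000000000" ]]; then
        echo "namespace $nsid is masked for this host"                # all-zero NGUID means not visible
    else
        echo "namespace $nsid is visible, nguid=$nguid"
    fi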
00:11:11.197 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:11.197 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.198 20:20:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:17.771 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:17.772 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:17.772 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:17.772 Found net devices under 0000:da:00.0: mlx_0_0 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.772 20:20:30 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:17.772 Found net devices under 0000:da:00.1: mlx_0_1 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:17.772 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:17.772 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:17.772 altname enp218s0f0np0 00:11:17.772 altname ens818f0np0 00:11:17.772 inet 192.168.100.8/24 scope global mlx_0_0 00:11:17.772 valid_lft forever preferred_lft forever 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:17.772 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:17.772 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:17.772 altname enp218s0f1np1 00:11:17.772 altname ens818f1np1 00:11:17.772 inet 192.168.100.9/24 scope global mlx_0_1 00:11:17.772 valid_lft forever preferred_lft forever 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:17.772 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:17.773 192.168.100.9' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:17.773 192.168.100.9' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:17.773 192.168.100.9' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2980001 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2980001 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 2980001 ']' 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:17.773 20:20:30 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.773 [2024-05-16 20:20:30.312116] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:11:17.773 [2024-05-16 20:20:30.312161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.773 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.773 [2024-05-16 20:20:30.374171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.773 [2024-05-16 20:20:30.448445] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.773 [2024-05-16 20:20:30.448485] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.773 [2024-05-16 20:20:30.448492] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.773 [2024-05-16 20:20:30.448498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.773 [2024-05-16 20:20:30.448503] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
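The target above is brought up by nvmfappstart: nvmf_tgt is launched with shared-memory id 0, the full tracepoint mask and core mask 0xF, and the harness then blocks in waitforlisten until the JSON-RPC socket answers. A condensed sketch of that sequence, assuming a checkout-relative build tree; the rpc_get_methods polling loop stands in for the harness's own waitforlisten logic and is not its exact implementation:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the target is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1     # give up if the target died during startup
        sleep 0.5
    done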
00:11:17.773 [2024-05-16 20:20:30.448556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.773 [2024-05-16 20:20:30.448584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.773 [2024-05-16 20:20:30.448676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.773 [2024-05-16 20:20:30.448677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.341 [2024-05-16 20:20:31.183797] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dfd9b0/0x1e01ea0) succeed. 00:11:18.341 [2024-05-16 20:20:31.194161] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dfeff0/0x1e43530) succeed. 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.341 Malloc0 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.341 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.600 Malloc1 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.600 20:20:31 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.600 [2024-05-16 20:20:31.385197] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:18.600 [2024-05-16 20:20:31.385536] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:18.600 00:11:18.600 Discovery Log Number of Records 2, Generation counter 2 00:11:18.600 =====Discovery Log Entry 0====== 00:11:18.600 trtype: rdma 00:11:18.600 adrfam: ipv4 00:11:18.600 subtype: current discovery subsystem 00:11:18.600 treq: not required 00:11:18.600 portid: 0 00:11:18.600 trsvcid: 4420 00:11:18.600 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:18.600 traddr: 192.168.100.8 00:11:18.600 eflags: explicit discovery connections, duplicate discovery information 00:11:18.600 rdma_prtype: not specified 00:11:18.600 rdma_qptype: connected 00:11:18.600 rdma_cms: rdma-cm 00:11:18.600 rdma_pkey: 0x0000 00:11:18.600 =====Discovery Log Entry 1====== 00:11:18.600 trtype: rdma 00:11:18.600 adrfam: ipv4 00:11:18.600 subtype: nvme subsystem 00:11:18.600 treq: not required 00:11:18.600 portid: 0 00:11:18.600 trsvcid: 4420 00:11:18.600 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:18.600 traddr: 192.168.100.8 00:11:18.600 eflags: none 00:11:18.600 rdma_prtype: not specified 00:11:18.600 rdma_qptype: connected 00:11:18.600 rdma_cms: rdma-cm 00:11:18.600 rdma_pkey: 0x0000 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* 
]] 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:18.600 20:20:31 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:19.535 20:20:32 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:19.535 20:20:32 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:11:19.535 20:20:32 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.535 20:20:32 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:19.535 20:20:32 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:19.535 20:20:32 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:22.064 /dev/nvme0n1 ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- 
# get_nvme_devs 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:22.064 20:20:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.629 20:20:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:22.629 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:11:22.629 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:22.629 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.629 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:22.629 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.629 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:11:22.629 20:20:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.630 20:20:35 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:22.630 rmmod nvme_rdma 00:11:22.630 rmmod nvme_fabrics 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2980001 ']' 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2980001 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 2980001 ']' 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 2980001 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:22.630 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2980001 00:11:22.888 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:22.888 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:22.888 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2980001' 00:11:22.888 killing process with pid 2980001 00:11:22.888 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 2980001 00:11:22.888 [2024-05-16 20:20:35.629559] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:22.888 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 2980001 00:11:22.888 [2024-05-16 20:20:35.709155] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:23.147 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:23.147 20:20:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:23.147 00:11:23.147 real 0m11.931s 00:11:23.147 user 0m23.485s 00:11:23.147 sys 0m5.196s 00:11:23.147 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:23.147 20:20:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:23.147 ************************************ 00:11:23.147 END TEST nvmf_nvme_cli 00:11:23.147 ************************************ 00:11:23.147 20:20:35 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:11:23.147 20:20:35 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:23.147 20:20:35 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:23.147 20:20:35 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:23.147 20:20:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:23.147 ************************************ 00:11:23.147 START TEST nvmf_host_management 00:11:23.147 ************************************ 00:11:23.147 20:20:35 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:23.147 * Looking for test storage... 
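The nvmf_nvme_cli run that ends above follows a single flow: discover the RDMA listener, connect to the subsystem, confirm that both malloc namespaces surface as block devices carrying the configured serial, then disconnect. A condensed sketch using the address, NQN and serial printed in the trace; the --hostnqn/--hostid flags used by the harness are omitted here for brevity, and the extra '-i 15' connect option is simply carried over from the RDMA setup shown in the log:

    traddr=192.168.100.8; subnqn=nqn.2016-06.io.spdk:cnode1
    nvme discover -t rdma -a "$traddr" -s 4420                      # expect the discovery entry plus cnode1
    nvme connect -i 15 -t rdma -n "$subnqn" -a "$traddr" -s 4420
    sleep 2
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME          # expect 2: the Malloc0 and Malloc1 namespaces
    nvme disconnect -n "$subnqn"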
00:11:23.147 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.147 20:20:36 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.148 20:20:36 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:29.714 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:29.714 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:29.714 Found net devices under 0000:da:00.0: mlx_0_0 00:11:29.714 
20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:29.714 Found net devices under 0000:da:00.1: mlx_0_1 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:29.714 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:29.715 20:20:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:29.715 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:29.715 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:29.715 altname enp218s0f0np0 00:11:29.715 altname ens818f0np0 00:11:29.715 inet 192.168.100.8/24 scope global mlx_0_0 00:11:29.715 valid_lft forever preferred_lft forever 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:29.715 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:29.715 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:29.715 altname enp218s0f1np1 00:11:29.715 altname ens818f1np1 00:11:29.715 inet 192.168.100.9/24 scope global mlx_0_1 00:11:29.715 valid_lft forever preferred_lft forever 
00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:29.715 192.168.100.9' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:29.715 192.168.100.9' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:29.715 192.168.100.9' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2984530 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2984530 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2984530 ']' 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.715 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:29.715 [2024-05-16 20:20:42.155164] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
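The head/tail pipeline traced at nvmf/common.sh@457-458 above is how the two target addresses are peeled off the newline-separated RDMA_IP_LIST before nvmf_tgt is launched. Reproduced in isolation with the values from this run (variable names copied from the trace):

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
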
00:11:29.715 [2024-05-16 20:20:42.155205] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.715 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.715 [2024-05-16 20:20:42.214704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.715 [2024-05-16 20:20:42.293254] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.715 [2024-05-16 20:20:42.293291] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.715 [2024-05-16 20:20:42.293298] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.715 [2024-05-16 20:20:42.293304] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.715 [2024-05-16 20:20:42.293309] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.715 [2024-05-16 20:20:42.293346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.715 [2024-05-16 20:20:42.293438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.715 [2024-05-16 20:20:42.293543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.715 [2024-05-16 20:20:42.293544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:30.284 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:30.284 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:30.284 20:20:42 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:30.284 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.284 20:20:42 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.284 [2024-05-16 20:20:43.036996] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13a0ca0/0x13a5190) succeed. 00:11:30.284 [2024-05-16 20:20:43.047139] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13a22e0/0x13e6820) succeed. 
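The nvmf_create_transport call traced at host_management.sh@18 above (and the two create_ib_device notices confirming both mlx5 ports were registered) can be driven by hand against a running nvmf_tgt with SPDK's rpc.py; the script path and RPC socket below are assumed to match the defaults used in this run:

# Flags mirror the rpc_cmd call in the trace:
#   -t rdma                     transport type
#   --num-shared-buffers 1024   shared receive buffer count
#   -u 8192                     I/O unit size in bytes
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
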
00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.284 Malloc0 00:11:30.284 [2024-05-16 20:20:43.219997] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:30.284 [2024-05-16 20:20:43.220377] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2984794 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2984794 /var/tmp/bdevperf.sock 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2984794 ']' 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:30.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
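The batch of RPCs cat'ed into rpc_cmd at host_management.sh@23-30 above is what produces the Malloc0-backed subsystem listening on 192.168.100.8:4420. A hand-rolled sketch of an equivalent sequence follows; the bdev size, block size and serial number are assumptions, while the NQNs, address and port are taken from the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks (sizes assumed)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
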
00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:30.284 { 00:11:30.284 "params": { 00:11:30.284 "name": "Nvme$subsystem", 00:11:30.284 "trtype": "$TEST_TRANSPORT", 00:11:30.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:30.284 "adrfam": "ipv4", 00:11:30.284 "trsvcid": "$NVMF_PORT", 00:11:30.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:30.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:30.284 "hdgst": ${hdgst:-false}, 00:11:30.284 "ddgst": ${ddgst:-false} 00:11:30.284 }, 00:11:30.284 "method": "bdev_nvme_attach_controller" 00:11:30.284 } 00:11:30.284 EOF 00:11:30.284 )") 00:11:30.284 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:30.543 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:30.543 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:30.543 20:20:43 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:30.543 "params": { 00:11:30.543 "name": "Nvme0", 00:11:30.543 "trtype": "rdma", 00:11:30.543 "traddr": "192.168.100.8", 00:11:30.543 "adrfam": "ipv4", 00:11:30.543 "trsvcid": "4420", 00:11:30.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:30.543 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:30.543 "hdgst": false, 00:11:30.543 "ddgst": false 00:11:30.543 }, 00:11:30.543 "method": "bdev_nvme_attach_controller" 00:11:30.543 }' 00:11:30.543 [2024-05-16 20:20:43.309243] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:11:30.543 [2024-05-16 20:20:43.309288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984794 ] 00:11:30.543 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.543 [2024-05-16 20:20:43.369509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.543 [2024-05-16 20:20:43.443642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.809 Running I/O for 10 seconds... 
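The JSON printed by nvmf/common.sh@558 above is the controller entry that gen_nvmf_target_json hands bdevperf on /dev/fd/63. The same workload can be reproduced with a config file on disk; the "subsystems"/"bdev" envelope below is an assumption about the wrapper the generator adds around the printed fragment, while the values inside it and the bdevperf flags are copied from the trace:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, I/O size, workload and runtime as the traced bdevperf invocation:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10
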
00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1580 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1580 -ge 100 ']' 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:31.377 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.378 20:20:44 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:32.313 [2024-05-16 20:20:45.209570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:11:32.313 [2024-05-16 20:20:45.209604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.313 [2024-05-16 20:20:45.209620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:11:32.313 [2024-05-16 20:20:45.209627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.313 [2024-05-16 20:20:45.209636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:11:32.313 [2024-05-16 20:20:45.209643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.313 [2024-05-16 20:20:45.209651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:11:32.313 [2024-05-16 20:20:45.209657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.313 [2024-05-16 20:20:45.209665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88192 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:11:32.314 [2024-05-16 20:20:45.209854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x200003a4b040 len:0x10000 key:0x182100 00:11:32.314 [2024-05-16 20:20:45.209869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:11:32.314 [2024-05-16 20:20:45.209883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:11:32.314 [2024-05-16 20:20:45.209899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:11:32.314 [2024-05-16 20:20:45.209914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:11:32.314 [2024-05-16 20:20:45.209928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:11:32.314 [2024-05-16 20:20:45.209949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcfe000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.209965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcdd000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.209981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.209989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcbc000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.209996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000dc9b000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc7a000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc59000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc38000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc17000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba93000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbf6000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbd5000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbb4000 
len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db93000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db72000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db51000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db30000 len:0x10000 key:0x182400 00:11:32.314 [2024-05-16 20:20:45.210206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.314 [2024-05-16 20:20:45.210214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:11:32.314 [2024-05-16 20:20:45.210220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 
key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 
key:0x182500 00:11:32.315 [2024-05-16 20:20:45.210412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 
00:11:32.315 [2024-05-16 20:20:45.210548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.210557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:11:32.315 [2024-05-16 20:20:45.210563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:a2e0 p:0 m:0 dnr:0 00:11:32.315 [2024-05-16 20:20:45.212398] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:11:32.315 [2024-05-16 20:20:45.213308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:32.315 task offset: 87168 on job bdev=Nvme0n1 fails 00:11:32.315 00:11:32.315 Latency(us) 00:11:32.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.315 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:32.315 Job: Nvme0n1 ended in about 1.59 seconds with error 00:11:32.315 Verification LBA range: start 0x0 length 0x400 00:11:32.315 Nvme0n1 : 1.59 1063.02 66.44 40.30 0.00 57499.09 2200.14 1022611.26 00:11:32.315 =================================================================================================================== 00:11:32.315 Total : 1063.02 66.44 40.30 0.00 57499.09 2200.14 1022611.26 00:11:32.315 [2024-05-16 20:20:45.214891] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2984794 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:32.315 { 00:11:32.315 "params": { 00:11:32.315 "name": "Nvme$subsystem", 00:11:32.315 "trtype": "$TEST_TRANSPORT", 00:11:32.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:32.315 "adrfam": "ipv4", 00:11:32.315 "trsvcid": "$NVMF_PORT", 00:11:32.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:32.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:32.315 "hdgst": ${hdgst:-false}, 00:11:32.315 "ddgst": ${ddgst:-false} 00:11:32.315 }, 00:11:32.315 "method": "bdev_nvme_attach_controller" 00:11:32.315 } 00:11:32.315 EOF 00:11:32.315 )") 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
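The burst of "ABORTED - SQ DELETION" completions and the "resetting controller" notice above are the intended fallout of host_management.sh@84-85: the host NQN is removed from the subsystem while bdevperf still has I/O in flight, then added back, which forces the target to drop the RDMA qpair and the initiator to reset the controller. Distilled from the rpc_cmd calls traced at 20:20:44, with $rpc standing in for the harness's rpc_cmd wrapper:

$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # host_management.sh@87: give the disconnect and reset a moment to play out

The first bdevperf (pid 2984794) is then killed and a second, shorter verify run is started to confirm the path still works end to end.
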
00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:32.315 20:20:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:32.315 "params": { 00:11:32.315 "name": "Nvme0", 00:11:32.315 "trtype": "rdma", 00:11:32.315 "traddr": "192.168.100.8", 00:11:32.315 "adrfam": "ipv4", 00:11:32.315 "trsvcid": "4420", 00:11:32.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:32.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:32.315 "hdgst": false, 00:11:32.315 "ddgst": false 00:11:32.315 }, 00:11:32.315 "method": "bdev_nvme_attach_controller" 00:11:32.315 }' 00:11:32.315 [2024-05-16 20:20:45.261350] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:11:32.315 [2024-05-16 20:20:45.261397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985047 ] 00:11:32.315 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.574 [2024-05-16 20:20:45.321724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.574 [2024-05-16 20:20:45.395448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.574 Running I/O for 1 seconds... 00:11:33.949 00:11:33.949 Latency(us) 00:11:33.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.949 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:33.949 Verification LBA range: start 0x0 length 0x400 00:11:33.949 Nvme0n1 : 1.02 3006.56 187.91 0.00 0.00 20848.85 908.92 43191.34 00:11:33.949 =================================================================================================================== 00:11:33.949 Total : 3006.56 187.91 0.00 0.00 20848.85 908.92 43191.34 00:11:33.949 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2984794 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:33.949 rmmod nvme_rdma 00:11:33.949 rmmod nvme_fabrics 00:11:33.949 
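For context on the bdevperf invocation above: the --json /dev/fd/62 argument is bash process substitution at work; host_management.sh (line 68 in the trace) passes --json <(gen_nvmf_target_json "0"), and the shell exposes that pipe as a /dev/fd path. A minimal sketch of the same pattern, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is defined, and reusing the workload parameters from this run:

    # feed the generated per-subsystem JSON straight to bdevperf via process substitution
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1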
20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2984530 ']' 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2984530 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 2984530 ']' 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 2984530 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2984530 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2984530' 00:11:33.949 killing process with pid 2984530 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 2984530 00:11:33.949 [2024-05-16 20:20:46.894998] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:33.949 20:20:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 2984530 00:11:34.207 [2024-05-16 20:20:46.973175] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:34.207 [2024-05-16 20:20:47.153563] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:34.207 20:20:47 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.207 20:20:47 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:34.207 20:20:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:34.207 00:11:34.207 real 0m11.179s 00:11:34.207 user 0m24.629s 00:11:34.207 sys 0m5.344s 00:11:34.207 20:20:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:34.207 20:20:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:34.207 ************************************ 00:11:34.207 END TEST nvmf_host_management 00:11:34.207 ************************************ 00:11:34.466 20:20:47 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:11:34.466 20:20:47 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:34.466 20:20:47 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:34.466 20:20:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 ************************************ 00:11:34.466 START TEST nvmf_lvol 00:11:34.466 ************************************ 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:11:34.466 * Looking for test storage... 00:11:34.466 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.466 20:20:47 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:39.738 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:39.738 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:39.738 Found net devices under 0000:da:00.0: mlx_0_0 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.738 20:20:52 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:39.738 Found net devices under 0000:da:00.1: mlx_0_1 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.738 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # 
continue 2 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:39.739 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:39.739 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:39.739 altname enp218s0f0np0 00:11:39.739 altname ens818f0np0 00:11:39.739 inet 192.168.100.8/24 scope global mlx_0_0 00:11:39.739 valid_lft forever preferred_lft forever 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:39.739 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:39.739 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:39.739 altname enp218s0f1np1 00:11:39.739 altname ens818f1np1 00:11:39.739 inet 192.168.100.9/24 scope global mlx_0_1 00:11:39.739 valid_lft forever preferred_lft forever 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.739 20:20:52 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:39.739 192.168.100.9' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:39.739 192.168.100.9' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:39.739 192.168.100.9' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2988677 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2988677 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 2988677 ']' 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:39.739 20:20:52 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:39.739 [2024-05-16 20:20:52.717654] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:11:39.739 [2024-05-16 20:20:52.717697] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.026 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.026 [2024-05-16 20:20:52.778700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.026 [2024-05-16 20:20:52.857849] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.026 [2024-05-16 20:20:52.857886] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.026 [2024-05-16 20:20:52.857894] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.026 [2024-05-16 20:20:52.857900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.026 [2024-05-16 20:20:52.857905] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:40.026 [2024-05-16 20:20:52.857953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.026 [2024-05-16 20:20:52.857969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.026 [2024-05-16 20:20:52.857971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.632 20:20:53 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:40.632 20:20:53 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:11:40.632 20:20:53 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:40.632 20:20:53 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.632 20:20:53 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:40.632 20:20:53 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.632 20:20:53 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:40.890 [2024-05-16 20:20:53.743426] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa04e10/0xa09300) succeed. 00:11:40.890 [2024-05-16 20:20:53.753784] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa063b0/0xa4a990) succeed. 00:11:40.890 20:20:53 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:41.148 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:41.148 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:41.407 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:41.407 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:41.665 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:41.665 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8120e45a-5241-49dd-844e-653ad547206a 00:11:41.665 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8120e45a-5241-49dd-844e-653ad547206a lvol 20 00:11:41.924 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=86690359-066a-407f-a4d0-bf17c04ca4f6 00:11:41.924 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:42.183 20:20:54 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86690359-066a-407f-a4d0-bf17c04ca4f6 00:11:42.183 20:20:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:42.442 [2024-05-16 20:20:55.301239] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 
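Condensed for readability, the lvol target provisioning traced above amounts to roughly the following rpc.py sequence (a sketch only: rpc.py stands for the full scripts/rpc.py path used throughout the trace, and the lvstore/lvol UUIDs are the ones returned in this particular run, so they will differ on a fresh target):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512     # first base bdev  -> Malloc0
    rpc.py bdev_malloc_create 64 512     # second base bdev -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs    # -> 8120e45a-5241-49dd-844e-653ad547206a
    rpc.py bdev_lvol_create -u 8120e45a-5241-49dd-844e-653ad547206a lvol 20
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86690359-066a-407f-a4d0-bf17c04ca4f6
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420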
00:11:42.442 [2024-05-16 20:20:55.301600] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.442 20:20:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:42.701 20:20:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2989263 00:11:42.701 20:20:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:42.701 20:20:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:42.701 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.637 20:20:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 86690359-066a-407f-a4d0-bf17c04ca4f6 MY_SNAPSHOT 00:11:43.895 20:20:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eed27bbc-93ee-4679-8688-61c7316a2dfa 00:11:43.895 20:20:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 86690359-066a-407f-a4d0-bf17c04ca4f6 30 00:11:44.154 20:20:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eed27bbc-93ee-4679-8688-61c7316a2dfa MY_CLONE 00:11:44.154 20:20:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8a96e213-ed4d-4d75-8822-67415cfe5186 00:11:44.154 20:20:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8a96e213-ed4d-4d75-8822-67415cfe5186 00:11:44.412 20:20:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2989263 00:11:54.389 Initializing NVMe Controllers 00:11:54.389 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:11:54.389 Controller IO queue size 128, less than required. 00:11:54.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:54.389 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:54.389 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:54.389 Initialization complete. Launching workers. 
00:11:54.389 ======================================================== 00:11:54.389 Latency(us) 00:11:54.389 Device Information : IOPS MiB/s Average min max 00:11:54.389 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15874.60 62.01 8065.63 2125.00 47778.72 00:11:54.389 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15889.00 62.07 8057.92 2979.81 38005.51 00:11:54.389 ======================================================== 00:11:54.389 Total : 31763.60 124.08 8061.78 2125.00 47778.72 00:11:54.389 00:11:54.389 20:21:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86690359-066a-407f-a4d0-bf17c04ca4f6 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8120e45a-5241-49dd-844e-653ad547206a 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.389 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:54.389 rmmod nvme_rdma 00:11:54.646 rmmod nvme_fabrics 00:11:54.646 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2988677 ']' 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2988677 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 2988677 ']' 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 2988677 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2988677 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2988677' 00:11:54.647 killing process with pid 2988677 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 2988677 00:11:54.647 [2024-05-16 20:21:07.465709] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:54.647 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 2988677 00:11:54.647 [2024-05-16 20:21:07.535150] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:54.906 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:54.906 20:21:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:54.906 00:11:54.906 real 0m20.508s 00:11:54.906 user 1m10.480s 00:11:54.906 sys 0m5.003s 00:11:54.906 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.906 20:21:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:54.906 ************************************ 00:11:54.906 END TEST nvmf_lvol 00:11:54.906 ************************************ 00:11:54.906 20:21:07 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:11:54.906 20:21:07 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:54.906 20:21:07 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.906 20:21:07 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:54.906 ************************************ 00:11:54.906 START TEST nvmf_lvs_grow 00:11:54.906 ************************************ 00:11:54.906 20:21:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:11:55.165 * Looking for test storage... 00:11:55.165 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.165 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.166 
20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.166 20:21:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.736 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.736 
20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:01.737 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:01.737 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:01.737 20:21:13 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:01.737 Found net devices under 0000:da:00.0: mlx_0_0 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:01.737 Found net devices under 0000:da:00.1: mlx_0_1 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.737 20:21:13 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:01.737 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.737 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:01.737 altname enp218s0f0np0 00:12:01.737 altname ens818f0np0 00:12:01.737 inet 192.168.100.8/24 scope global mlx_0_0 00:12:01.737 valid_lft forever preferred_lft forever 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:01.737 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.737 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:01.737 altname enp218s0f1np1 00:12:01.737 altname ens818f1np1 00:12:01.737 inet 192.168.100.9/24 scope global mlx_0_1 00:12:01.737 valid_lft forever preferred_lft forever 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.737 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:01.738 20:21:14 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:01.738 192.168.100.9' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:01.738 192.168.100.9' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:01.738 192.168.100.9' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2995279 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2995279 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 2995279 ']' 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:01.738 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:01.738 [2024-05-16 20:21:14.157744] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:12:01.738 [2024-05-16 20:21:14.157791] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.738 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.738 [2024-05-16 20:21:14.218130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.738 [2024-05-16 20:21:14.293415] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.738 [2024-05-16 20:21:14.293462] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.738 [2024-05-16 20:21:14.293469] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.738 [2024-05-16 20:21:14.293475] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.738 [2024-05-16 20:21:14.293480] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.738 [2024-05-16 20:21:14.293508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.996 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:01.997 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:12:01.997 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.997 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.997 20:21:14 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:02.255 20:21:14 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.255 20:21:14 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:02.255 [2024-05-16 20:21:15.167842] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd537e0/0xd57cd0) succeed. 00:12:02.255 [2024-05-16 20:21:15.178609] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd54ce0/0xd99360) succeed. 
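Aside for readers following the trace: everything above this point is the standard nvmftestinit/nvmfappstart sequence from nvmf/common.sh — detect the mlx5 ports, load the RDMA kernel modules, pick up the 192.168.100.8/192.168.100.9 addresses, start nvmf_tgt and create the RDMA transport. A minimal stand-alone sketch of the same setup is shown below; it assumes an mlx5 NIC whose ports already carry those addresses (as on this test host) and uses $SPDK_DIR as a placeholder for the SPDK checkout, so it is an illustration of the flow, not a replacement for the helper functions.

    # Load the RDMA/IB modules that load_ib_rdma_modules loads in the trace
    sudo modprobe ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma

    # Read the first IPv4 address of an RDMA-capable port, as get_ip_address does
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8

    # Start the NVMe-oF target with the same flags as nvmfappstart, wait for
    # /var/tmp/spdk.sock to appear, then create the RDMA transport
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192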
00:12:02.255 20:21:15 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:02.255 20:21:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:02.255 20:21:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.255 20:21:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:02.515 ************************************ 00:12:02.515 START TEST lvs_grow_clean 00:12:02.515 ************************************ 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:02.515 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:02.773 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:02.773 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:02.773 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:03.031 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:03.031 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:03.031 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 lvol 150 00:12:03.031 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e22cea8-9fb5-4a8a-88cd-d95484cc87fa 00:12:03.031 20:21:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:03.031 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:03.289 [2024-05-16 20:21:16.142087] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:03.289 [2024-05-16 20:21:16.142138] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:03.289 true 00:12:03.289 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:03.289 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:03.548 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:03.548 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:03.548 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e22cea8-9fb5-4a8a-88cd-d95484cc87fa 00:12:03.806 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:04.064 [2024-05-16 20:21:16.828026] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:04.064 [2024-05-16 20:21:16.828419] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2995786 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2995786 /var/tmp/bdevperf.sock 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 2995786 ']' 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:04.064 20:21:16 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:04.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:04.064 20:21:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:04.064 [2024-05-16 20:21:17.036660] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:04.064 [2024-05-16 20:21:17.036704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995786 ] 00:12:04.322 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.322 [2024-05-16 20:21:17.096507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.322 [2024-05-16 20:21:17.168049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.888 20:21:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.888 20:21:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:12:04.888 20:21:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:05.147 Nvme0n1 00:12:05.147 20:21:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:05.405 [ 00:12:05.405 { 00:12:05.405 "name": "Nvme0n1", 00:12:05.405 "aliases": [ 00:12:05.405 "3e22cea8-9fb5-4a8a-88cd-d95484cc87fa" 00:12:05.405 ], 00:12:05.405 "product_name": "NVMe disk", 00:12:05.405 "block_size": 4096, 00:12:05.405 "num_blocks": 38912, 00:12:05.405 "uuid": "3e22cea8-9fb5-4a8a-88cd-d95484cc87fa", 00:12:05.405 "assigned_rate_limits": { 00:12:05.405 "rw_ios_per_sec": 0, 00:12:05.405 "rw_mbytes_per_sec": 0, 00:12:05.405 "r_mbytes_per_sec": 0, 00:12:05.405 "w_mbytes_per_sec": 0 00:12:05.405 }, 00:12:05.405 "claimed": false, 00:12:05.405 "zoned": false, 00:12:05.405 "supported_io_types": { 00:12:05.405 "read": true, 00:12:05.405 "write": true, 00:12:05.405 "unmap": true, 00:12:05.405 "write_zeroes": true, 00:12:05.405 "flush": true, 00:12:05.405 "reset": true, 00:12:05.405 "compare": true, 00:12:05.405 "compare_and_write": true, 00:12:05.405 "abort": true, 00:12:05.405 "nvme_admin": true, 00:12:05.405 "nvme_io": true 00:12:05.405 }, 00:12:05.405 "memory_domains": [ 00:12:05.405 { 00:12:05.405 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:05.405 "dma_device_type": 0 00:12:05.405 } 00:12:05.405 ], 00:12:05.405 "driver_specific": { 00:12:05.405 "nvme": [ 00:12:05.405 { 00:12:05.405 "trid": { 00:12:05.405 "trtype": "RDMA", 00:12:05.405 "adrfam": "IPv4", 00:12:05.405 "traddr": "192.168.100.8", 00:12:05.405 "trsvcid": "4420", 00:12:05.405 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:05.405 }, 00:12:05.405 "ctrlr_data": { 00:12:05.405 "cntlid": 1, 00:12:05.405 "vendor_id": "0x8086", 00:12:05.405 "model_number": "SPDK bdev Controller", 00:12:05.405 "serial_number": "SPDK0", 00:12:05.405 
"firmware_revision": "24.09", 00:12:05.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:05.405 "oacs": { 00:12:05.405 "security": 0, 00:12:05.405 "format": 0, 00:12:05.405 "firmware": 0, 00:12:05.405 "ns_manage": 0 00:12:05.405 }, 00:12:05.405 "multi_ctrlr": true, 00:12:05.405 "ana_reporting": false 00:12:05.405 }, 00:12:05.405 "vs": { 00:12:05.405 "nvme_version": "1.3" 00:12:05.405 }, 00:12:05.405 "ns_data": { 00:12:05.405 "id": 1, 00:12:05.405 "can_share": true 00:12:05.405 } 00:12:05.405 } 00:12:05.405 ], 00:12:05.405 "mp_policy": "active_passive" 00:12:05.405 } 00:12:05.405 } 00:12:05.405 ] 00:12:05.405 20:21:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2996020 00:12:05.405 20:21:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:05.405 20:21:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:05.405 Running I/O for 10 seconds... 00:12:06.359 Latency(us) 00:12:06.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.359 Nvme0n1 : 1.00 34430.00 134.49 0.00 0.00 0.00 0.00 0.00 00:12:06.359 =================================================================================================================== 00:12:06.359 Total : 34430.00 134.49 0.00 0.00 0.00 0.00 0.00 00:12:06.359 00:12:07.295 20:21:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:07.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.553 Nvme0n1 : 2.00 34590.00 135.12 0.00 0.00 0.00 0.00 0.00 00:12:07.553 =================================================================================================================== 00:12:07.553 Total : 34590.00 135.12 0.00 0.00 0.00 0.00 0.00 00:12:07.553 00:12:07.553 true 00:12:07.553 20:21:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:07.553 20:21:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:07.811 20:21:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:07.811 20:21:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:07.811 20:21:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2996020 00:12:08.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.378 Nvme0n1 : 3.00 34678.67 135.46 0.00 0.00 0.00 0.00 0.00 00:12:08.378 =================================================================================================================== 00:12:08.378 Total : 34678.67 135.46 0.00 0.00 0.00 0.00 0.00 00:12:08.378 00:12:09.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.755 Nvme0n1 : 4.00 34823.50 136.03 0.00 0.00 0.00 0.00 0.00 00:12:09.755 =================================================================================================================== 00:12:09.755 Total : 34823.50 136.03 0.00 0.00 0.00 
0.00 0.00 00:12:09.755 00:12:10.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.691 Nvme0n1 : 5.00 34931.60 136.45 0.00 0.00 0.00 0.00 0.00 00:12:10.691 =================================================================================================================== 00:12:10.691 Total : 34931.60 136.45 0.00 0.00 0.00 0.00 0.00 00:12:10.691 00:12:11.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.627 Nvme0n1 : 6.00 35013.33 136.77 0.00 0.00 0.00 0.00 0.00 00:12:11.627 =================================================================================================================== 00:12:11.627 Total : 35013.33 136.77 0.00 0.00 0.00 0.00 0.00 00:12:11.627 00:12:12.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.564 Nvme0n1 : 7.00 35071.86 137.00 0.00 0.00 0.00 0.00 0.00 00:12:12.564 =================================================================================================================== 00:12:12.564 Total : 35071.86 137.00 0.00 0.00 0.00 0.00 0.00 00:12:12.564 00:12:13.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.501 Nvme0n1 : 8.00 35116.12 137.17 0.00 0.00 0.00 0.00 0.00 00:12:13.501 =================================================================================================================== 00:12:13.501 Total : 35116.12 137.17 0.00 0.00 0.00 0.00 0.00 00:12:13.501 00:12:14.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.437 Nvme0n1 : 9.00 35150.78 137.31 0.00 0.00 0.00 0.00 0.00 00:12:14.437 =================================================================================================================== 00:12:14.437 Total : 35150.78 137.31 0.00 0.00 0.00 0.00 0.00 00:12:14.437 00:12:15.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.372 Nvme0n1 : 10.00 35173.70 137.40 0.00 0.00 0.00 0.00 0.00 00:12:15.372 =================================================================================================================== 00:12:15.372 Total : 35173.70 137.40 0.00 0.00 0.00 0.00 0.00 00:12:15.372 00:12:15.632 00:12:15.632 Latency(us) 00:12:15.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.632 Nvme0n1 : 10.00 35172.46 137.39 0.00 0.00 3635.91 2793.08 12233.39 00:12:15.632 =================================================================================================================== 00:12:15.632 Total : 35172.46 137.39 0.00 0.00 3635.91 2793.08 12233.39 00:12:15.632 0 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2995786 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 2995786 ']' 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 2995786 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2995786 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:15.632 
20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2995786' 00:12:15.632 killing process with pid 2995786 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 2995786 00:12:15.632 Received shutdown signal, test time was about 10.000000 seconds 00:12:15.632 00:12:15.632 Latency(us) 00:12:15.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.632 =================================================================================================================== 00:12:15.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 2995786 00:12:15.632 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:15.890 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:16.149 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:16.149 20:21:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:16.408 [2024-05-16 20:21:29.302271] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:16.408 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:16.667 request: 00:12:16.667 { 00:12:16.667 "uuid": "f4de12d7-edac-4d3d-9cfa-fb7bea467767", 00:12:16.667 "method": "bdev_lvol_get_lvstores", 00:12:16.667 "req_id": 1 00:12:16.667 } 00:12:16.667 Got JSON-RPC error response 00:12:16.667 response: 00:12:16.667 { 00:12:16.667 "code": -19, 00:12:16.667 "message": "No such device" 00:12:16.667 } 00:12:16.667 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:16.667 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:16.667 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:16.667 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:16.667 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:16.926 aio_bdev 00:12:16.926 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3e22cea8-9fb5-4a8a-88cd-d95484cc87fa 00:12:16.926 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=3e22cea8-9fb5-4a8a-88cd-d95484cc87fa 00:12:16.926 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:16.926 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:12:16.926 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:16.926 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:16.926 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:16.926 20:21:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3e22cea8-9fb5-4a8a-88cd-d95484cc87fa -t 2000 00:12:17.187 [ 00:12:17.187 { 00:12:17.187 "name": "3e22cea8-9fb5-4a8a-88cd-d95484cc87fa", 00:12:17.187 "aliases": [ 00:12:17.187 "lvs/lvol" 00:12:17.187 ], 00:12:17.187 "product_name": "Logical Volume", 00:12:17.187 "block_size": 4096, 00:12:17.187 "num_blocks": 38912, 00:12:17.187 "uuid": "3e22cea8-9fb5-4a8a-88cd-d95484cc87fa", 00:12:17.187 "assigned_rate_limits": { 00:12:17.187 "rw_ios_per_sec": 0, 00:12:17.187 "rw_mbytes_per_sec": 0, 00:12:17.187 "r_mbytes_per_sec": 0, 00:12:17.187 "w_mbytes_per_sec": 0 00:12:17.187 }, 00:12:17.187 "claimed": false, 00:12:17.187 "zoned": 
false, 00:12:17.187 "supported_io_types": { 00:12:17.187 "read": true, 00:12:17.187 "write": true, 00:12:17.187 "unmap": true, 00:12:17.187 "write_zeroes": true, 00:12:17.187 "flush": false, 00:12:17.187 "reset": true, 00:12:17.187 "compare": false, 00:12:17.187 "compare_and_write": false, 00:12:17.187 "abort": false, 00:12:17.187 "nvme_admin": false, 00:12:17.187 "nvme_io": false 00:12:17.187 }, 00:12:17.187 "driver_specific": { 00:12:17.187 "lvol": { 00:12:17.187 "lvol_store_uuid": "f4de12d7-edac-4d3d-9cfa-fb7bea467767", 00:12:17.187 "base_bdev": "aio_bdev", 00:12:17.187 "thin_provision": false, 00:12:17.187 "num_allocated_clusters": 38, 00:12:17.187 "snapshot": false, 00:12:17.187 "clone": false, 00:12:17.187 "esnap_clone": false 00:12:17.187 } 00:12:17.187 } 00:12:17.187 } 00:12:17.187 ] 00:12:17.187 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:12:17.187 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:17.187 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:17.572 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:17.572 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:17.572 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:17.572 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:17.572 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e22cea8-9fb5-4a8a-88cd-d95484cc87fa 00:12:17.863 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4de12d7-edac-4d3d-9cfa-fb7bea467767 00:12:17.863 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:18.121 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:18.121 00:12:18.121 real 0m15.647s 00:12:18.121 user 0m15.660s 00:12:18.121 sys 0m1.021s 00:12:18.121 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.121 20:21:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:18.121 ************************************ 00:12:18.121 END TEST lvs_grow_clean 00:12:18.121 ************************************ 00:12:18.121 20:21:30 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:18.121 20:21:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:18.121 20:21:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.121 20:21:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:18.121 ************************************ 
00:12:18.121 START TEST lvs_grow_dirty 00:12:18.121 ************************************ 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:18.121 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:18.380 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:18.380 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:18.638 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:18.638 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:18.638 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:18.638 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:18.638 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:18.638 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 lvol 150 00:12:18.897 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 00:12:18.897 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:18.897 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:18.897 [2024-05-16 20:21:31.883025] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:18.897 [2024-05-16 20:21:31.883075] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:18.897 true 00:12:19.156 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:19.156 20:21:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:19.156 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:19.156 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:19.415 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 00:12:19.674 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:19.674 [2024-05-16 20:21:32.553204] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:19.674 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2998387 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2998387 /var/tmp/bdevperf.sock 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2998387 ']' 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:19.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:19.933 20:21:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:19.933 [2024-05-16 20:21:32.764781] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
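The dirty-run setup above mirrors the clean run: an AIO bdev backed by a 200M file, an lvstore on top of it, a 150M lvol, then the backing file grown to 400M and rescanned. What the trace does next is export that lvol over NVMe-oF/RDMA and attach it from bdevperf. Condensed into plain commands (with <lvol-uuid> standing in for the lvol UUID printed above and $SPDK_DIR for the checkout path), the flow is roughly:

    # Expose the lvol bdev through an NVMe-oF subsystem listening on RDMA
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

    # Run bdevperf as the initiator: attach the remote namespace as Nvme0n1,
    # then drive 10 seconds of 4K random writes against it
    $SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests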
00:12:19.933 [2024-05-16 20:21:32.764832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998387 ] 00:12:19.933 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.933 [2024-05-16 20:21:32.822776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.933 [2024-05-16 20:21:32.900165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.869 20:21:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:20.869 20:21:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:20.869 20:21:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:20.869 Nvme0n1 00:12:20.869 20:21:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:21.128 [ 00:12:21.128 { 00:12:21.128 "name": "Nvme0n1", 00:12:21.128 "aliases": [ 00:12:21.128 "55b3b698-c8cd-47e2-a52c-6e5ab4c7d359" 00:12:21.128 ], 00:12:21.128 "product_name": "NVMe disk", 00:12:21.128 "block_size": 4096, 00:12:21.128 "num_blocks": 38912, 00:12:21.128 "uuid": "55b3b698-c8cd-47e2-a52c-6e5ab4c7d359", 00:12:21.128 "assigned_rate_limits": { 00:12:21.128 "rw_ios_per_sec": 0, 00:12:21.128 "rw_mbytes_per_sec": 0, 00:12:21.128 "r_mbytes_per_sec": 0, 00:12:21.128 "w_mbytes_per_sec": 0 00:12:21.128 }, 00:12:21.128 "claimed": false, 00:12:21.128 "zoned": false, 00:12:21.128 "supported_io_types": { 00:12:21.128 "read": true, 00:12:21.128 "write": true, 00:12:21.128 "unmap": true, 00:12:21.128 "write_zeroes": true, 00:12:21.128 "flush": true, 00:12:21.128 "reset": true, 00:12:21.128 "compare": true, 00:12:21.128 "compare_and_write": true, 00:12:21.128 "abort": true, 00:12:21.128 "nvme_admin": true, 00:12:21.128 "nvme_io": true 00:12:21.128 }, 00:12:21.128 "memory_domains": [ 00:12:21.128 { 00:12:21.128 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:21.128 "dma_device_type": 0 00:12:21.128 } 00:12:21.128 ], 00:12:21.128 "driver_specific": { 00:12:21.128 "nvme": [ 00:12:21.128 { 00:12:21.128 "trid": { 00:12:21.128 "trtype": "RDMA", 00:12:21.128 "adrfam": "IPv4", 00:12:21.128 "traddr": "192.168.100.8", 00:12:21.128 "trsvcid": "4420", 00:12:21.128 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:21.128 }, 00:12:21.128 "ctrlr_data": { 00:12:21.128 "cntlid": 1, 00:12:21.128 "vendor_id": "0x8086", 00:12:21.128 "model_number": "SPDK bdev Controller", 00:12:21.128 "serial_number": "SPDK0", 00:12:21.128 "firmware_revision": "24.09", 00:12:21.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:21.128 "oacs": { 00:12:21.128 "security": 0, 00:12:21.128 "format": 0, 00:12:21.128 "firmware": 0, 00:12:21.128 "ns_manage": 0 00:12:21.128 }, 00:12:21.128 "multi_ctrlr": true, 00:12:21.128 "ana_reporting": false 00:12:21.128 }, 00:12:21.128 "vs": { 00:12:21.128 "nvme_version": "1.3" 00:12:21.128 }, 00:12:21.128 "ns_data": { 00:12:21.128 "id": 1, 00:12:21.128 "can_share": true 00:12:21.128 } 00:12:21.128 } 00:12:21.128 ], 00:12:21.128 "mp_policy": "active_passive" 00:12:21.128 } 00:12:21.128 } 00:12:21.128 ] 00:12:21.128 20:21:34 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:21.128 20:21:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2998619 00:12:21.128 20:21:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:21.128 Running I/O for 10 seconds... 00:12:22.504 Latency(us) 00:12:22.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.504 Nvme0n1 : 1.00 34400.00 134.38 0.00 0.00 0.00 0.00 0.00 00:12:22.504 =================================================================================================================== 00:12:22.504 Total : 34400.00 134.38 0.00 0.00 0.00 0.00 0.00 00:12:22.504 00:12:23.072 20:21:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:23.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.331 Nvme0n1 : 2.00 34756.00 135.77 0.00 0.00 0.00 0.00 0.00 00:12:23.331 =================================================================================================================== 00:12:23.331 Total : 34756.00 135.77 0.00 0.00 0.00 0.00 0.00 00:12:23.331 00:12:23.331 true 00:12:23.331 20:21:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:23.331 20:21:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:23.589 20:21:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:23.590 20:21:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:23.590 20:21:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2998619 00:12:24.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.157 Nvme0n1 : 3.00 34888.67 136.28 0.00 0.00 0.00 0.00 0.00 00:12:24.157 =================================================================================================================== 00:12:24.157 Total : 34888.67 136.28 0.00 0.00 0.00 0.00 0.00 00:12:24.157 00:12:25.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.533 Nvme0n1 : 4.00 35010.00 136.76 0.00 0.00 0.00 0.00 0.00 00:12:25.533 =================================================================================================================== 00:12:25.533 Total : 35010.00 136.76 0.00 0.00 0.00 0.00 0.00 00:12:25.533 00:12:26.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.468 Nvme0n1 : 5.00 35086.00 137.05 0.00 0.00 0.00 0.00 0.00 00:12:26.468 =================================================================================================================== 00:12:26.468 Total : 35086.00 137.05 0.00 0.00 0.00 0.00 0.00 00:12:26.468 00:12:27.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:27.405 Nvme0n1 : 6.00 35105.00 137.13 0.00 0.00 0.00 0.00 0.00 00:12:27.405 
=================================================================================================================== 00:12:27.405 Total : 35105.00 137.13 0.00 0.00 0.00 0.00 0.00 00:12:27.405 00:12:28.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.342 Nvme0n1 : 7.00 35089.71 137.07 0.00 0.00 0.00 0.00 0.00 00:12:28.342 =================================================================================================================== 00:12:28.342 Total : 35089.71 137.07 0.00 0.00 0.00 0.00 0.00 00:12:28.342 00:12:29.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.276 Nvme0n1 : 8.00 35128.75 137.22 0.00 0.00 0.00 0.00 0.00 00:12:29.276 =================================================================================================================== 00:12:29.276 Total : 35128.75 137.22 0.00 0.00 0.00 0.00 0.00 00:12:29.276 00:12:30.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.212 Nvme0n1 : 9.00 35165.33 137.36 0.00 0.00 0.00 0.00 0.00 00:12:30.212 =================================================================================================================== 00:12:30.212 Total : 35165.33 137.36 0.00 0.00 0.00 0.00 0.00 00:12:30.212 00:12:31.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.148 Nvme0n1 : 10.00 35196.30 137.49 0.00 0.00 0.00 0.00 0.00 00:12:31.148 =================================================================================================================== 00:12:31.148 Total : 35196.30 137.49 0.00 0.00 0.00 0.00 0.00 00:12:31.148 00:12:31.148 00:12:31.148 Latency(us) 00:12:31.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.149 Nvme0n1 : 10.00 35195.11 137.48 0.00 0.00 3633.56 2278.16 14542.75 00:12:31.149 =================================================================================================================== 00:12:31.149 Total : 35195.11 137.48 0.00 0.00 3633.56 2278.16 14542.75 00:12:31.149 0 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2998387 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 2998387 ']' 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 2998387 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2998387 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2998387' 00:12:31.408 killing process with pid 2998387 00:12:31.408 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 2998387 00:12:31.408 Received shutdown signal, test time was about 10.000000 seconds 00:12:31.408 00:12:31.408 Latency(us) 00:12:31.409 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.409 =================================================================================================================== 00:12:31.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:31.409 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 2998387 00:12:31.409 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:31.667 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:31.956 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:31.956 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:31.956 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:31.956 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:31.956 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2995279 00:12:31.956 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2995279 00:12:32.217 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2995279 Killed "${NVMF_APP[@]}" "$@" 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3000463 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3000463 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3000463 ']' 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
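[Annotation] The stretch above is the "dirty" half of the lvs_grow test: once the 10-second bdevperf run finishes, the listener and subsystem are torn down, the lvstore is checked (61 free clusters), and the nvmf target holding it is killed with SIGKILL so the lvstore is never cleanly unloaded; a fresh target (pid 3000463, core mask 0x1) is then started. A condensed sketch of those steps, not part of the captured output, with rpc.py paths shortened and the pid/UUID values copied from the trace:
  scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 | jq -r '.[0].free_clusters'   # 61
  kill -9 2995279                      # SIGKILL the target while the lvstore is still open -> left "dirty"
  nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &     # restarted target (pid 3000463) must recover that lvstore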
00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:32.217 20:21:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:32.217 [2024-05-16 20:21:45.017306] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:32.217 [2024-05-16 20:21:45.017353] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.217 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.217 [2024-05-16 20:21:45.077660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.217 [2024-05-16 20:21:45.155916] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.217 [2024-05-16 20:21:45.155949] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.217 [2024-05-16 20:21:45.155956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.217 [2024-05-16 20:21:45.155962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.217 [2024-05-16 20:21:45.155967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.217 [2024-05-16 20:21:45.155983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.155 20:21:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:33.155 20:21:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:33.155 20:21:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.155 20:21:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.155 20:21:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:33.155 20:21:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.155 20:21:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:33.155 [2024-05-16 20:21:46.001537] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:33.155 [2024-05-16 20:21:46.001634] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:33.155 [2024-05-16 20:21:46.001657] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:33.155 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:33.155 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 00:12:33.155 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 00:12:33.155 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:33.155 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:33.155 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:33.155 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:33.155 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:33.414 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 -t 2000 00:12:33.414 [ 00:12:33.414 { 00:12:33.414 "name": "55b3b698-c8cd-47e2-a52c-6e5ab4c7d359", 00:12:33.414 "aliases": [ 00:12:33.414 "lvs/lvol" 00:12:33.414 ], 00:12:33.414 "product_name": "Logical Volume", 00:12:33.414 "block_size": 4096, 00:12:33.414 "num_blocks": 38912, 00:12:33.414 "uuid": "55b3b698-c8cd-47e2-a52c-6e5ab4c7d359", 00:12:33.414 "assigned_rate_limits": { 00:12:33.414 "rw_ios_per_sec": 0, 00:12:33.414 "rw_mbytes_per_sec": 0, 00:12:33.414 "r_mbytes_per_sec": 0, 00:12:33.414 "w_mbytes_per_sec": 0 00:12:33.414 }, 00:12:33.414 "claimed": false, 00:12:33.414 "zoned": false, 00:12:33.414 "supported_io_types": { 00:12:33.414 "read": true, 00:12:33.414 "write": true, 00:12:33.414 "unmap": true, 00:12:33.414 "write_zeroes": true, 00:12:33.414 "flush": false, 00:12:33.414 "reset": true, 00:12:33.414 "compare": false, 00:12:33.414 "compare_and_write": false, 00:12:33.414 "abort": false, 00:12:33.414 "nvme_admin": false, 00:12:33.414 "nvme_io": false 00:12:33.414 }, 00:12:33.414 "driver_specific": { 00:12:33.414 "lvol": { 00:12:33.414 "lvol_store_uuid": "32c99b03-abab-40eb-89bd-d2ebc70bebe0", 00:12:33.414 "base_bdev": "aio_bdev", 00:12:33.414 "thin_provision": false, 00:12:33.414 "num_allocated_clusters": 38, 00:12:33.414 "snapshot": false, 00:12:33.414 "clone": false, 00:12:33.414 "esnap_clone": false 00:12:33.414 } 00:12:33.414 } 00:12:33.414 } 00:12:33.414 ] 00:12:33.414 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:33.414 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:33.414 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:33.673 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:33.674 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:33.674 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:33.932 [2024-05-16 20:21:46.846229] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 
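[Annotation] After the restart, re-creating the AIO bdev is what brings the dirty lvstore back: the blobstore recovery notices above ("Performing recovery on blobstore", blobs 0x0/0x1 replayed) fire during bdev_aio_create, the logical volume 55b3b698-... reappears, the cluster counts from the earlier grow are re-verified, and the backing bdev is then hot-removed again. Condensed sketch (paths shortened; names and UUIDs as printed in the trace):
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096    # triggers blobstore recovery
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b 55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 -t 2000
  scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 | jq -r '.[0].free_clusters'        # expected 61
  scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 | jq -r '.[0].total_data_clusters'  # expected 99
  scripts/rpc.py bdev_aio_delete aio_bdev    # hot-remove: "closing lvstore lvs"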
00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:33.932 20:21:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:34.190 request: 00:12:34.190 { 00:12:34.190 "uuid": "32c99b03-abab-40eb-89bd-d2ebc70bebe0", 00:12:34.190 "method": "bdev_lvol_get_lvstores", 00:12:34.190 "req_id": 1 00:12:34.190 } 00:12:34.190 Got JSON-RPC error response 00:12:34.190 response: 00:12:34.190 { 00:12:34.190 "code": -19, 00:12:34.190 "message": "No such device" 00:12:34.190 } 00:12:34.190 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:34.190 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:34.190 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:34.190 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:34.190 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:34.450 aio_bdev 00:12:34.450 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 00:12:34.450 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 00:12:34.450 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:34.450 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:34.450 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:34.450 20:21:47 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:34.450 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:34.450 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 -t 2000 00:12:34.708 [ 00:12:34.708 { 00:12:34.708 "name": "55b3b698-c8cd-47e2-a52c-6e5ab4c7d359", 00:12:34.708 "aliases": [ 00:12:34.708 "lvs/lvol" 00:12:34.708 ], 00:12:34.708 "product_name": "Logical Volume", 00:12:34.708 "block_size": 4096, 00:12:34.708 "num_blocks": 38912, 00:12:34.708 "uuid": "55b3b698-c8cd-47e2-a52c-6e5ab4c7d359", 00:12:34.708 "assigned_rate_limits": { 00:12:34.708 "rw_ios_per_sec": 0, 00:12:34.708 "rw_mbytes_per_sec": 0, 00:12:34.708 "r_mbytes_per_sec": 0, 00:12:34.708 "w_mbytes_per_sec": 0 00:12:34.708 }, 00:12:34.708 "claimed": false, 00:12:34.708 "zoned": false, 00:12:34.708 "supported_io_types": { 00:12:34.708 "read": true, 00:12:34.708 "write": true, 00:12:34.708 "unmap": true, 00:12:34.708 "write_zeroes": true, 00:12:34.708 "flush": false, 00:12:34.708 "reset": true, 00:12:34.708 "compare": false, 00:12:34.708 "compare_and_write": false, 00:12:34.708 "abort": false, 00:12:34.708 "nvme_admin": false, 00:12:34.708 "nvme_io": false 00:12:34.708 }, 00:12:34.709 "driver_specific": { 00:12:34.709 "lvol": { 00:12:34.709 "lvol_store_uuid": "32c99b03-abab-40eb-89bd-d2ebc70bebe0", 00:12:34.709 "base_bdev": "aio_bdev", 00:12:34.709 "thin_provision": false, 00:12:34.709 "num_allocated_clusters": 38, 00:12:34.709 "snapshot": false, 00:12:34.709 "clone": false, 00:12:34.709 "esnap_clone": false 00:12:34.709 } 00:12:34.709 } 00:12:34.709 } 00:12:34.709 ] 00:12:34.709 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:34.709 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:34.709 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:34.967 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:34.967 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:34.967 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:34.967 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:34.967 20:21:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 55b3b698-c8cd-47e2-a52c-6e5ab4c7d359 00:12:35.225 20:21:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0 00:12:35.484 20:21:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
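[Annotation] The hot-remove negative check behaves as intended: with aio_bdev gone, the lvstore lookup returns JSON-RPC error -19 "No such device" (the request/response pair above), and re-creating the AIO bdev recovers the volume a second time with the same cluster counts. The test then cleans up in the usual order, lvol first, then lvstore, then the backing bdev; condensed sketch (paths shortened, values from the trace):
  scripts/rpc.py bdev_lvol_delete 55b3b698-c8cd-47e2-a52c-6e5ab4c7d359
  scripts/rpc.py bdev_lvol_delete_lvstore -u 32c99b03-abab-40eb-89bd-d2ebc70bebe0
  scripts/rpc.py bdev_aio_delete aio_bdev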
00:12:35.484 20:21:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:35.484 00:12:35.484 real 0m17.454s 00:12:35.484 user 0m45.726s 00:12:35.484 sys 0m2.857s 00:12:35.484 20:21:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:35.484 20:21:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:35.484 ************************************ 00:12:35.484 END TEST lvs_grow_dirty 00:12:35.484 ************************************ 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:35.743 nvmf_trace.0 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:35.743 rmmod nvme_rdma 00:12:35.743 rmmod nvme_fabrics 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3000463 ']' 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3000463 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3000463 ']' 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3000463 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3000463 00:12:35.743 20:21:48 
nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3000463' 00:12:35.743 killing process with pid 3000463 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3000463 00:12:35.743 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3000463 00:12:36.003 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.003 20:21:48 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:36.003 00:12:36.003 real 0m40.955s 00:12:36.003 user 1m7.310s 00:12:36.003 sys 0m8.930s 00:12:36.003 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:36.003 20:21:48 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:36.003 ************************************ 00:12:36.003 END TEST nvmf_lvs_grow 00:12:36.003 ************************************ 00:12:36.003 20:21:48 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:36.003 20:21:48 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:36.003 20:21:48 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:36.003 20:21:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:36.003 ************************************ 00:12:36.003 START TEST nvmf_bdev_io_wait 00:12:36.003 ************************************ 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:36.003 * Looking for test storage... 
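[Annotation] With lvs_grow_dirty passed (real 0m17.454s) and the nvmf target for that suite shut down, the harness moves on to the next target test. The invocation, as logged, boils down to the following (path shortened):
  run_test nvmf_bdev_io_wait test/nvmf/target/bdev_io_wait.sh --transport=rdma
bdev_io_wait.sh starts by sourcing test/nvmf/common.sh and running nvmftestinit, which is what the long stretch of trace below is: PCI discovery of the two Mellanox mlx5 ports, RDMA kernel module loading, and per-port IP checks.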
00:12:36.003 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.003 20:21:48 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.003 20:21:48 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.575 
20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:42.575 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:42.575 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:42.575 Found net devices under 0000:da:00.0: mlx_0_0 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:42.575 Found net devices under 0000:da:00.1: mlx_0_1 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.575 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:42.576 20:21:54 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:42.576 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:42.576 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:42.576 altname enp218s0f0np0 00:12:42.576 altname ens818f0np0 00:12:42.576 inet 192.168.100.8/24 scope global mlx_0_0 00:12:42.576 valid_lft forever preferred_lft forever 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:42.576 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:42.576 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:42.576 altname enp218s0f1np1 00:12:42.576 altname ens818f1np1 00:12:42.576 inet 192.168.100.9/24 scope global mlx_0_1 00:12:42.576 valid_lft forever preferred_lft forever 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:42.576 20:21:54 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:42.576 192.168.100.9' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:42.576 192.168.100.9' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:42.576 192.168.100.9' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:42.576 20:21:54 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3004564 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3004564 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3004564 ']' 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:42.576 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.576 [2024-05-16 20:21:55.044664] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:42.576 [2024-05-16 20:21:55.044705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.577 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.577 [2024-05-16 20:21:55.104480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.577 [2024-05-16 20:21:55.186226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.577 [2024-05-16 20:21:55.186262] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
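[Annotation] The nvmftestinit trace above reduces to a small amount of real work once the mlx5 ports (0000:da:00.0/00.1 -> mlx_0_0/mlx_0_1) are found: load the IB/RDMA stack, confirm each port already carries its test address, and load the NVMe/RDMA initiator module before starting the target. Condensed sketch, not captured output; interface names and addresses are the ones printed in the log:
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe $m; done
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9
  modprobe nvme-rdma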
00:12:42.577 [2024-05-16 20:21:55.186270] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.577 [2024-05-16 20:21:55.186275] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.577 [2024-05-16 20:21:55.186281] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.577 [2024-05-16 20:21:55.186315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.577 [2024-05-16 20:21:55.186413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.577 [2024-05-16 20:21:55.186501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.577 [2024-05-16 20:21:55.186503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.145 20:21:55 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.145 [2024-05-16 20:21:55.981868] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x150a9d0/0x150eec0) succeed. 00:12:43.145 [2024-05-16 20:21:55.991808] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x150c010/0x1550550) succeed. 
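[Annotation] With the new nvmf_tgt up on cores 0-3 (pid 3004564, started --wait-for-rpc), the bdev_io_wait test configures the target over RPC. The sequence around this point in the trace, condensed (rpc_cmd is the autotest wrapper around scripts/rpc.py; the comment on -p/-c is an interpretation, not something the log states):
  rpc_cmd bdev_set_options -p 5 -c 1      # tiny bdev_io pool/cache, presumably so the workloads hit the io_wait path
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
Four bdevperf instances are then launched in parallel on separate cores (-m 0x10/0x20/0x40/0x80) running write, read, flush, and unmap workloads against that subsystem; the gen_nvmf_target_json heredocs below are the per-instance configs fed to them.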
00:12:43.145 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.145 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:43.145 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.145 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 Malloc0 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 [2024-05-16 20:21:56.162542] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:43.405 [2024-05-16 20:21:56.162927] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3004815 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3004817 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:43.405 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:43.405 { 00:12:43.406 "params": { 00:12:43.406 "name": "Nvme$subsystem", 00:12:43.406 "trtype": "$TEST_TRANSPORT", 00:12:43.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:43.406 "adrfam": "ipv4", 00:12:43.406 "trsvcid": "$NVMF_PORT", 00:12:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:43.406 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:12:43.406 "hdgst": ${hdgst:-false}, 00:12:43.406 "ddgst": ${ddgst:-false} 00:12:43.406 }, 00:12:43.406 "method": "bdev_nvme_attach_controller" 00:12:43.406 } 00:12:43.406 EOF 00:12:43.406 )") 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3004819 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:43.406 { 00:12:43.406 "params": { 00:12:43.406 "name": "Nvme$subsystem", 00:12:43.406 "trtype": "$TEST_TRANSPORT", 00:12:43.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:43.406 "adrfam": "ipv4", 00:12:43.406 "trsvcid": "$NVMF_PORT", 00:12:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:43.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:43.406 "hdgst": ${hdgst:-false}, 00:12:43.406 "ddgst": ${ddgst:-false} 00:12:43.406 }, 00:12:43.406 "method": "bdev_nvme_attach_controller" 00:12:43.406 } 00:12:43.406 EOF 00:12:43.406 )") 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3004822 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:43.406 { 00:12:43.406 "params": { 00:12:43.406 "name": "Nvme$subsystem", 00:12:43.406 "trtype": "$TEST_TRANSPORT", 00:12:43.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:43.406 "adrfam": "ipv4", 00:12:43.406 "trsvcid": "$NVMF_PORT", 00:12:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:43.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:43.406 "hdgst": ${hdgst:-false}, 00:12:43.406 "ddgst": ${ddgst:-false} 00:12:43.406 }, 00:12:43.406 "method": "bdev_nvme_attach_controller" 00:12:43.406 } 00:12:43.406 EOF 00:12:43.406 )") 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- 
# config=() 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:43.406 { 00:12:43.406 "params": { 00:12:43.406 "name": "Nvme$subsystem", 00:12:43.406 "trtype": "$TEST_TRANSPORT", 00:12:43.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:43.406 "adrfam": "ipv4", 00:12:43.406 "trsvcid": "$NVMF_PORT", 00:12:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:43.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:43.406 "hdgst": ${hdgst:-false}, 00:12:43.406 "ddgst": ${ddgst:-false} 00:12:43.406 }, 00:12:43.406 "method": "bdev_nvme_attach_controller" 00:12:43.406 } 00:12:43.406 EOF 00:12:43.406 )") 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3004815 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:43.406 "params": { 00:12:43.406 "name": "Nvme1", 00:12:43.406 "trtype": "rdma", 00:12:43.406 "traddr": "192.168.100.8", 00:12:43.406 "adrfam": "ipv4", 00:12:43.406 "trsvcid": "4420", 00:12:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.406 "hdgst": false, 00:12:43.406 "ddgst": false 00:12:43.406 }, 00:12:43.406 "method": "bdev_nvme_attach_controller" 00:12:43.406 }' 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
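Each of the four bdevperf instances reads its configuration from --json /dev/fd/63; the heredoc above expands to one bdev_nvme_attach_controller entry per instance, which jq then prints as shown. A hedged reconstruction of launching the write instance by hand, writing the same JSON to an illustrative file path (the outer "subsystems"/"bdev"/"config" wrapper is an assumption about what gen_nvmf_target_json emits; the inner entry is copied from the printf output above):

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256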
00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:43.406 "params": { 00:12:43.406 "name": "Nvme1", 00:12:43.406 "trtype": "rdma", 00:12:43.406 "traddr": "192.168.100.8", 00:12:43.406 "adrfam": "ipv4", 00:12:43.406 "trsvcid": "4420", 00:12:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.406 "hdgst": false, 00:12:43.406 "ddgst": false 00:12:43.406 }, 00:12:43.406 "method": "bdev_nvme_attach_controller" 00:12:43.406 }' 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:43.406 "params": { 00:12:43.406 "name": "Nvme1", 00:12:43.406 "trtype": "rdma", 00:12:43.406 "traddr": "192.168.100.8", 00:12:43.406 "adrfam": "ipv4", 00:12:43.406 "trsvcid": "4420", 00:12:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.406 "hdgst": false, 00:12:43.406 "ddgst": false 00:12:43.406 }, 00:12:43.406 "method": "bdev_nvme_attach_controller" 00:12:43.406 }' 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:43.406 20:21:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:43.406 "params": { 00:12:43.406 "name": "Nvme1", 00:12:43.406 "trtype": "rdma", 00:12:43.406 "traddr": "192.168.100.8", 00:12:43.406 "adrfam": "ipv4", 00:12:43.406 "trsvcid": "4420", 00:12:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.406 "hdgst": false, 00:12:43.406 "ddgst": false 00:12:43.406 }, 00:12:43.406 "method": "bdev_nvme_attach_controller" 00:12:43.406 }' 00:12:43.406 [2024-05-16 20:21:56.210941] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:43.406 [2024-05-16 20:21:56.210994] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:43.406 [2024-05-16 20:21:56.213751] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:43.406 [2024-05-16 20:21:56.213753] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:43.406 [2024-05-16 20:21:56.213792] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-16 20:21:56.213792] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:43.406 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:43.406 [2024-05-16 20:21:56.213799] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:12:43.406 [2024-05-16 20:21:56.213842] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:43.406 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.406 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.406 [2024-05-16 20:21:56.396362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.666 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.666 [2024-05-16 20:21:56.470926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:43.666 [2024-05-16 20:21:56.489999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.666 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.666 [2024-05-16 20:21:56.567723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:43.666 [2024-05-16 20:21:56.591928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.666 [2024-05-16 20:21:56.652453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.925 [2024-05-16 20:21:56.674426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:43.925 [2024-05-16 20:21:56.729596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:43.925 Running I/O for 1 seconds... 00:12:43.925 Running I/O for 1 seconds... 00:12:43.925 Running I/O for 1 seconds... 00:12:43.925 Running I/O for 1 seconds... 00:12:44.860 00:12:44.860 Latency(us) 00:12:44.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.860 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:44.860 Nvme1n1 : 1.01 18010.33 70.35 0.00 0.00 7084.19 4275.44 13544.11 00:12:44.860 =================================================================================================================== 00:12:44.860 Total : 18010.33 70.35 0.00 0.00 7084.19 4275.44 13544.11 00:12:44.860 00:12:44.860 Latency(us) 00:12:44.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.861 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:44.861 Nvme1n1 : 1.00 254948.60 995.89 0.00 0.00 500.16 197.00 1981.68 00:12:44.861 =================================================================================================================== 00:12:44.861 Total : 254948.60 995.89 0.00 0.00 500.16 197.00 1981.68 00:12:44.861 00:12:44.861 Latency(us) 00:12:44.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.861 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:44.861 Nvme1n1 : 1.00 17229.13 67.30 0.00 0.00 7408.25 4712.35 17226.61 00:12:44.861 =================================================================================================================== 00:12:44.861 Total : 17229.13 67.30 0.00 0.00 7408.25 4712.35 17226.61 00:12:45.119 00:12:45.119 Latency(us) 00:12:45.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.119 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:45.119 Nvme1n1 : 1.00 15061.43 58.83 0.00 0.00 8477.27 3885.35 20222.54 00:12:45.119 =================================================================================================================== 00:12:45.120 Total : 15061.43 58.83 0.00 0.00 8477.27 3885.35 20222.54 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 3004817 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3004819 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3004822 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:45.379 rmmod nvme_rdma 00:12:45.379 rmmod nvme_fabrics 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3004564 ']' 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3004564 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3004564 ']' 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3004564 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3004564 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3004564' 00:12:45.379 killing process with pid 3004564 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3004564 00:12:45.379 [2024-05-16 20:21:58.265287] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:45.379 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3004564 00:12:45.379 [2024-05-16 20:21:58.344048] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:45.639 20:21:58 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:45.639 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:45.639 00:12:45.639 real 0m9.658s 00:12:45.639 user 0m20.678s 00:12:45.639 sys 0m5.757s 00:12:45.639 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.639 20:21:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:45.639 ************************************ 00:12:45.639 END TEST nvmf_bdev_io_wait 00:12:45.639 ************************************ 00:12:45.639 20:21:58 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:45.639 20:21:58 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:45.639 20:21:58 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.639 20:21:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:45.639 ************************************ 00:12:45.639 START TEST nvmf_queue_depth 00:12:45.639 ************************************ 00:12:45.639 20:21:58 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:45.898 * Looking for test storage... 00:12:45.898 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.898 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.899 20:21:58 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:51.168 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:51.168 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 
0 )) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:51.168 Found net devices under 0000:da:00.0: mlx_0_0 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.168 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:51.169 Found net devices under 0000:da:00.1: mlx_0_1 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:51.169 20:22:04 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:51.169 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.169 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:51.169 altname enp218s0f0np0 00:12:51.169 altname ens818f0np0 00:12:51.169 inet 192.168.100.8/24 scope global mlx_0_0 00:12:51.169 valid_lft forever preferred_lft forever 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 
00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:51.169 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.169 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:51.169 altname enp218s0f1np1 00:12:51.169 altname ens818f1np1 00:12:51.169 inet 192.168.100.9/24 scope global mlx_0_1 00:12:51.169 valid_lft forever preferred_lft forever 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:51.169 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:51.446 192.168.100.9' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:51.446 192.168.100.9' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:51.446 192.168.100.9' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3008645 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3008645 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3008645 ']' 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
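The address discovery traced above boils down to parsing `ip -o -4 addr show` for each RDMA interface and then splitting the aggregated list into first and second target IPs with head/tail. A compact sketch of the same parsing, simplified from the get_ip_address / RDMA_IP_LIST steps in the trace (interface names as found on this node):

    get_ip_address() {
        # field 4 of `ip -o -4 addr show <if>` is "ADDR/PREFIX"; strip the prefix length
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 here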
00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.446 20:22:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:51.446 [2024-05-16 20:22:04.284285] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:51.446 [2024-05-16 20:22:04.284326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.446 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.446 [2024-05-16 20:22:04.343646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.446 [2024-05-16 20:22:04.420953] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.446 [2024-05-16 20:22:04.420989] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.446 [2024-05-16 20:22:04.420996] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.446 [2024-05-16 20:22:04.421003] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.446 [2024-05-16 20:22:04.421008] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.446 [2024-05-16 20:22:04.421033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:52.432 [2024-05-16 20:22:05.139526] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15b8ae0/0x15bcfd0) succeed. 00:12:52.432 [2024-05-16 20:22:05.148485] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15b9fe0/0x15fe660) succeed. 
00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:52.432 Malloc0 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:52.432 [2024-05-16 20:22:05.237695] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:52.432 [2024-05-16 20:22:05.238039] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3008891 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3008891 /var/tmp/bdevperf.sock 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3008891 ']' 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:52.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
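With the target up, the script plumbs a 64 MiB malloc bdev into subsystem cnode1 and exposes it on the RDMA listener at 192.168.100.8:4420. The same sequence, sketched with scripts/rpc.py in place of the harness's rpc_cmd wrapper:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420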
00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.432 20:22:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:52.432 [2024-05-16 20:22:05.283220] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:12:52.432 [2024-05-16 20:22:05.283260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008891 ] 00:12:52.432 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.432 [2024-05-16 20:22:05.342770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.432 [2024-05-16 20:22:05.416387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.370 20:22:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.370 20:22:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:53.370 20:22:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:53.370 20:22:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.370 20:22:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:53.370 NVMe0n1 00:12:53.370 20:22:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.370 20:22:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:53.370 Running I/O for 10 seconds... 
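On the initiator side, bdevperf is started idle with -z and its own RPC socket; the NVMe-oF controller is then attached over that socket and bdevperf.py triggers the timed run, which produces the NVMe0n1 results below. A hand-run sketch of the same steps, with paths relative to an SPDK build tree:

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests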
00:13:03.351 00:13:03.351 Latency(us) 00:13:03.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.351 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:03.351 Verification LBA range: start 0x0 length 0x4000 00:13:03.351 NVMe0n1 : 10.05 17321.59 67.66 0.00 0.00 58970.07 22719.15 43690.67 00:13:03.351 =================================================================================================================== 00:13:03.351 Total : 17321.59 67.66 0.00 0.00 58970.07 22719.15 43690.67 00:13:03.351 0 00:13:03.351 20:22:16 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3008891 00:13:03.351 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3008891 ']' 00:13:03.351 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3008891 00:13:03.351 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:13:03.351 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3008891 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3008891' 00:13:03.611 killing process with pid 3008891 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3008891 00:13:03.611 Received shutdown signal, test time was about 10.000000 seconds 00:13:03.611 00:13:03.611 Latency(us) 00:13:03.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.611 =================================================================================================================== 00:13:03.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3008891 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.611 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:03.611 rmmod nvme_rdma 00:13:03.611 rmmod nvme_fabrics 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3008645 ']' 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3008645 
00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3008645 ']' 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3008645 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3008645 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3008645' 00:13:03.871 killing process with pid 3008645 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3008645 00:13:03.871 [2024-05-16 20:22:16.671735] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:03.871 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3008645 00:13:03.871 [2024-05-16 20:22:16.715306] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:04.130 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.130 20:22:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:04.130 00:13:04.130 real 0m18.319s 00:13:04.130 user 0m25.674s 00:13:04.130 sys 0m4.793s 00:13:04.130 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:04.130 20:22:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:04.130 ************************************ 00:13:04.130 END TEST nvmf_queue_depth 00:13:04.130 ************************************ 00:13:04.130 20:22:16 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:04.130 20:22:16 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:04.130 20:22:16 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.130 20:22:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:04.130 ************************************ 00:13:04.130 START TEST nvmf_target_multipath 00:13:04.131 ************************************ 00:13:04.131 20:22:16 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:04.131 * Looking for test storage... 
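The START TEST / END TEST banners and the real/user/sys timings seen here come from SPDK's run_test wrapper in test/common/autotest_common.sh. A simplified approximation of that pattern (not the actual helper, which also tracks nesting and exit codes) is:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # e.g. .../test/nvmf/target/multipath.sh --transport=rdma
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }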
00:13:04.131 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.131 20:22:17 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.702 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.703 20:22:22 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:10.703 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:10.703 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:10.703 Found net devices under 0000:da:00.0: mlx_0_0 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:10.703 Found net devices under 0000:da:00.1: mlx_0_1 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:10.703 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:10.703 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:10.703 altname enp218s0f0np0 00:13:10.703 altname ens818f0np0 00:13:10.703 inet 192.168.100.8/24 scope global mlx_0_0 00:13:10.703 valid_lft forever preferred_lft forever 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:10.703 20:22:22 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:10.703 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:10.703 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:10.703 altname enp218s0f1np1 00:13:10.703 altname ens818f1np1 00:13:10.703 inet 192.168.100.9/24 scope global mlx_0_1 00:13:10.703 valid_lft forever preferred_lft forever 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:10.703 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:10.704 192.168.100.9' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:10.704 192.168.100.9' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:10.704 192.168.100.9' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:13:10.704 run this test only with TCP transport for now 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:10.704 rmmod nvme_rdma 00:13:10.704 rmmod nvme_fabrics 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:10.704 
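The interface discovery traced above condenses to a small helper: get_ip_address (nvmf/common.sh@112-113 in the trace) reads the first IPv4 address off a given RDMA netdev, and the two mlx_0_* ports supply the target IPs. A condensed illustration follows; the real code walks get_rdma_if_list, builds RDMA_IP_LIST, and picks the first and second entries with head/tail.

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this system
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9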
20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:10.704 00:13:10.704 real 0m5.792s 00:13:10.704 user 0m1.639s 00:13:10.704 sys 0m4.252s 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.704 20:22:22 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:10.704 ************************************ 00:13:10.704 END TEST nvmf_target_multipath 00:13:10.704 ************************************ 00:13:10.704 20:22:22 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:10.704 20:22:22 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:10.704 20:22:22 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.704 20:22:22 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:10.704 ************************************ 00:13:10.704 START TEST nvmf_zcopy 00:13:10.704 ************************************ 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:10.704 * Looking for test storage... 
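The multipath test above ends almost immediately because the suite is currently gated to TCP, and the zcopy test that follows bails out the same way with its own "Unsupported transport: rdma" guard. The check traced at multipath.sh@51-54 amounts to the following; the variable name is an assumption, since the xtrace only shows the already-expanded comparison '[ rdma != tcp ]'.

    if [ "$TEST_TRANSPORT" != tcp ]; then    # assumed variable; trace shows '[ rdma != tcp ]'
        echo "run this test only with TCP transport for now"
        nvmftestfini
        exit 0
    fi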
00:13:10.704 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.704 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.705 20:22:22 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:17.276 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:17.276 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.276 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:17.277 Found net devices under 0000:da:00.0: mlx_0_0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:17.277 Found net devices under 0000:da:00.1: mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.277 20:22:29 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:17.277 20:22:29 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:17.277 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:17.277 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:17.277 altname enp218s0f0np0 00:13:17.277 altname ens818f0np0 00:13:17.277 inet 192.168.100.8/24 scope global mlx_0_0 00:13:17.277 valid_lft forever preferred_lft forever 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:17.277 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:17.277 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:17.277 altname enp218s0f1np1 00:13:17.277 altname ens818f1np1 00:13:17.277 inet 192.168.100.9/24 scope global mlx_0_1 00:13:17.277 valid_lft forever preferred_lft forever 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy 
-- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:17.277 192.168.100.9' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:17.277 192.168.100.9' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:17.277 192.168.100.9' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3017650 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3017650 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3017650 ']' 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:17.277 20:22:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.278 20:22:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:17.278 20:22:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.278 [2024-05-16 20:22:29.505043] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:13:17.278 [2024-05-16 20:22:29.505097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.278 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.278 [2024-05-16 20:22:29.567570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.278 [2024-05-16 20:22:29.642170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.278 [2024-05-16 20:22:29.642207] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.278 [2024-05-16 20:22:29.642213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.278 [2024-05-16 20:22:29.642219] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.278 [2024-05-16 20:22:29.642225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
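
The 192.168.100.8/192.168.100.9 pair used throughout this run comes from the per-interface lookup traced at the top of this block (ip -o -4 addr show IF, then awk '{print $4}' and cut -d/ -f1). A minimal standalone sketch of that lookup, assuming the same mlx_0_0/mlx_0_1 device names as above; the ipv4_of helper name is illustrative, not the function defined in nvmf/common.sh:

    #!/usr/bin/env bash
    # Print the first IPv4 address assigned to an interface, e.g. 192.168.100.8 for mlx_0_0.
    ipv4_of() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1 | head -n 1
    }
    NVMF_FIRST_TARGET_IP=$(ipv4_of mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(ipv4_of mlx_0_1)    # 192.168.100.9 in this run
    [[ -n "$NVMF_FIRST_TARGET_IP" ]] || { echo "no RDMA-capable interface found" >&2; exit 1; }
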
00:13:17.278 [2024-05-16 20:22:29.642243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.657 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:17.657 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:13:17.657 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.657 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.657 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.657 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:13:17.658 Unsupported transport: rdma 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@804 -- # type=--id 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@805 -- # id=0 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@816 -- # for n in $shm_files 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:17.658 nvmf_trace.0 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # return 0 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:17.658 rmmod nvme_rdma 00:13:17.658 rmmod nvme_fabrics 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3017650 ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3017650 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3017650 ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3017650 00:13:17.658 20:22:30 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3017650 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3017650' 00:13:17.658 killing process with pid 3017650 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3017650 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3017650 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:17.658 00:13:17.658 real 0m7.798s 00:13:17.658 user 0m3.280s 00:13:17.658 sys 0m5.168s 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.658 20:22:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.658 ************************************ 00:13:17.658 END TEST nvmf_zcopy 00:13:17.658 ************************************ 00:13:17.916 20:22:30 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:17.916 20:22:30 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:17.916 20:22:30 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.916 20:22:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:17.916 ************************************ 00:13:17.916 START TEST nvmf_nmic 00:13:17.916 ************************************ 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:17.916 * Looking for test storage... 
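
Before nvmf_nmic starts, the zcopy run above is torn down by nvmftestfini/nvmfcleanup in a fixed order: archive the trace file from /dev/shm, unload the initiator kernel modules, then kill the target. A condensed sketch of that order, under the assumption that output_dir is a placeholder for the CI output directory and nvmfpid holds the target PID recorded at startup (3017650 above):

    # Assumed simplification of the teardown traced above; error handling omitted.
    tar -C /dev/shm -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep the trace for offline analysis
    sync
    modprobe -v -r nvme-rdma        # the trace shows this also removes nvme_fabrics as a dependency
    modprobe -v -r nvme-fabrics     # usually a no-op by this point; kept for symmetry with the log
    kill "$nvmfpid" && wait "$nvmfpid"
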
00:13:17.916 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.916 
20:22:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.916 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.917 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.917 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.917 20:22:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.917 20:22:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.917 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.917 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.917 20:22:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.917 20:22:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:24.486 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:24.487 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:24.487 20:22:36 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:24.487 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:24.487 20:22:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:24.487 Found net devices under 0000:da:00.0: mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:24.487 Found net devices under 0000:da:00.1: mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:24.487 20:22:37 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:24.487 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:13:24.487 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:24.487 altname enp218s0f0np0 00:13:24.487 altname ens818f0np0 00:13:24.487 inet 192.168.100.8/24 scope global mlx_0_0 00:13:24.487 valid_lft forever preferred_lft forever 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:24.487 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:24.487 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:24.487 altname enp218s0f1np1 00:13:24.487 altname ens818f1np1 00:13:24.487 inet 192.168.100.9/24 scope global mlx_0_1 00:13:24.487 valid_lft forever preferred_lft forever 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:24.487 192.168.100.9' 00:13:24.487 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:24.487 192.168.100.9' 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:24.488 192.168.100.9' 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3021352 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3021352 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3021352 ']' 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:24.488 20:22:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:24.488 [2024-05-16 20:22:37.268503] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:13:24.488 [2024-05-16 20:22:37.268555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.488 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.488 [2024-05-16 20:22:37.329988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.488 [2024-05-16 20:22:37.412485] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.488 [2024-05-16 20:22:37.412525] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.488 [2024-05-16 20:22:37.412533] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.488 [2024-05-16 20:22:37.412538] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.488 [2024-05-16 20:22:37.412543] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.488 [2024-05-16 20:22:37.412591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.488 [2024-05-16 20:22:37.412688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.488 [2024-05-16 20:22:37.412775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.488 [2024-05-16 20:22:37.412776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 [2024-05-16 20:22:38.143477] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11bf9b0/0x11c3ea0) succeed. 00:13:25.425 [2024-05-16 20:22:38.153851] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11c0ff0/0x1205530) succeed. 
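
The two "Create IB device" notices above are the target's response to the nvmf_create_transport call, and the rpc_cmd invocations traced next build the rest of the nmic fixture. Since rpc_cmd ultimately drives scripts/rpc.py against the /var/tmp/spdk.sock socket the target is listening on, the same bring-up can be reproduced by hand roughly as follows (socket path assumed to be the default):

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
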
00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 Malloc0 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 [2024-05-16 20:22:38.318117] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:25.425 [2024-05-16 20:22:38.318525] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:25.425 test case1: single bdev can't be used in multiple subsystems 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 [2024-05-16 20:22:38.342242] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:25.425 [2024-05-16 20:22:38.342259] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:25.425 [2024-05-16 20:22:38.342266] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.425 request: 00:13:25.425 { 00:13:25.425 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:25.425 "namespace": { 00:13:25.425 "bdev_name": "Malloc0", 00:13:25.425 "no_auto_visible": false 00:13:25.425 }, 00:13:25.425 "method": "nvmf_subsystem_add_ns", 00:13:25.425 "req_id": 1 00:13:25.425 } 00:13:25.425 Got JSON-RPC error response 00:13:25.425 response: 00:13:25.425 { 00:13:25.425 "code": -32602, 00:13:25.425 "message": "Invalid parameters" 00:13:25.425 } 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:25.425 Adding namespace failed - expected result. 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:25.425 test case2: host connect to nvmf target in multiple paths 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.425 [2024-05-16 20:22:38.354293] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.425 20:22:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:26.363 20:22:39 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:13:27.299 20:22:40 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.299 20:22:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:13:27.299 20:22:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.299 20:22:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:27.299 20:22:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:13:29.831 20:22:42 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:29.831 20:22:42 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:29.831 20:22:42 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 
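
Test case2 above connects the host to the same subsystem through two listeners (4420 and the newly added 4421) and then counts block devices carrying the test serial. Reproduced by hand from the commands traced above, with HOSTNQN/HOSTID being the values produced by nvme gen-hostnqn during nvmftestinit, and the loop merely a compact way to issue the two connects:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
    HOSTID=803833e2-2ada-e911-906e-0017a4403562
    for port in 4420 4421; do
        nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=$HOSTID \
            -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s $port
    done
    # waitforserial: ready once exactly one block device reports the test serial.
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

Because both paths reach the same namespace, a single nvme disconnect -n nqn.2016-06.io.spdk:cnode1 later in the log reports two controllers disconnected, one per path.
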
00:13:29.831 20:22:42 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:29.831 20:22:42 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.831 20:22:42 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:13:29.831 20:22:42 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:29.831 [global] 00:13:29.831 thread=1 00:13:29.831 invalidate=1 00:13:29.831 rw=write 00:13:29.831 time_based=1 00:13:29.831 runtime=1 00:13:29.831 ioengine=libaio 00:13:29.831 direct=1 00:13:29.831 bs=4096 00:13:29.831 iodepth=1 00:13:29.831 norandommap=0 00:13:29.831 numjobs=1 00:13:29.831 00:13:29.831 verify_dump=1 00:13:29.831 verify_backlog=512 00:13:29.831 verify_state_save=0 00:13:29.831 do_verify=1 00:13:29.831 verify=crc32c-intel 00:13:29.831 [job0] 00:13:29.831 filename=/dev/nvme0n1 00:13:29.831 Could not set queue depth (nvme0n1) 00:13:29.831 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.831 fio-3.35 00:13:29.831 Starting 1 thread 00:13:30.767 00:13:30.767 job0: (groupid=0, jobs=1): err= 0: pid=3022334: Thu May 16 20:22:43 2024 00:13:30.767 read: IOPS=7162, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:13:30.767 slat (nsec): min=6184, max=29624, avg=7048.42, stdev=819.25 00:13:30.767 clat (nsec): min=35306, max=98545, avg=58705.95, stdev=3861.03 00:13:30.767 lat (usec): min=56, max=127, avg=65.75, stdev= 3.93 00:13:30.767 clat percentiles (nsec): 00:13:30.767 | 1.00th=[51456], 5.00th=[52992], 10.00th=[53504], 20.00th=[55040], 00:13:30.767 | 30.00th=[56576], 40.00th=[57600], 50.00th=[58624], 60.00th=[59648], 00:13:30.767 | 70.00th=[60672], 80.00th=[62208], 90.00th=[63744], 95.00th=[65280], 00:13:30.767 | 99.00th=[68096], 99.50th=[69120], 99.90th=[72192], 99.95th=[76288], 00:13:30.767 | 99.99th=[98816] 00:13:30.767 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:13:30.767 slat (nsec): min=8089, max=42402, avg=8900.33, stdev=1037.90 00:13:30.767 clat (usec): min=41, max=208, avg=56.30, stdev= 4.49 00:13:30.767 lat (usec): min=55, max=225, avg=65.20, stdev= 4.71 00:13:30.767 clat percentiles (usec): 00:13:30.767 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:13:30.767 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:13:30.767 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 62], 95.00th=[ 64], 00:13:30.767 | 99.00th=[ 67], 99.50th=[ 68], 99.90th=[ 77], 99.95th=[ 88], 00:13:30.767 | 99.99th=[ 208] 00:13:30.767 bw ( KiB/s): min=30704, max=30704, per=100.00%, avg=30704.00, stdev= 0.00, samples=1 00:13:30.767 iops : min= 7676, max= 7676, avg=7676.00, stdev= 0.00, samples=1 00:13:30.767 lat (usec) : 50=1.89%, 100=98.10%, 250=0.01% 00:13:30.767 cpu : usr=8.50%, sys=15.30%, ctx=14850, majf=0, minf=2 00:13:30.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.767 issued rwts: total=7170,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.767 00:13:30.767 Run status group 0 (all jobs): 00:13:30.767 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:13:30.767 
WRITE: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:13:30.767 00:13:30.767 Disk stats (read/write): 00:13:30.767 nvme0n1: ios=6706/6678, merge=0/0, ticks=356/324, in_queue=680, util=90.68% 00:13:30.767 20:22:43 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.677 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:32.677 rmmod nvme_rdma 00:13:32.935 rmmod nvme_fabrics 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3021352 ']' 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3021352 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3021352 ']' 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3021352 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3021352 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3021352' 00:13:32.935 killing process with pid 3021352 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3021352 00:13:32.935 [2024-05-16 20:22:45.745102] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in 
favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:32.935 20:22:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3021352 00:13:32.935 [2024-05-16 20:22:45.828531] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:33.193 20:22:46 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.193 20:22:46 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:33.193 00:13:33.193 real 0m15.314s 00:13:33.193 user 0m41.835s 00:13:33.193 sys 0m5.649s 00:13:33.193 20:22:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.193 20:22:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:33.193 ************************************ 00:13:33.193 END TEST nvmf_nmic 00:13:33.193 ************************************ 00:13:33.193 20:22:46 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:33.193 20:22:46 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:33.193 20:22:46 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.193 20:22:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:33.193 ************************************ 00:13:33.193 START TEST nvmf_fio_target 00:13:33.193 ************************************ 00:13:33.193 20:22:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:33.193 * Looking for test storage... 00:13:33.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 
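
The read/write bandwidth figures in the nmic results above (about 28 and 30 MiB/s at queue depth 1) come from the fio-wrapper call with -p nvmf -i 4096 -d 1 -t write -r 1 -v, i.e. a one-second 4 KiB sequential-write job with crc32c-intel verification against the freshly connected /dev/nvme0n1. A sketch that writes out the job file exactly as it was dumped before that run and invokes fio directly; the /tmp path is arbitrary and fio is assumed to be on PATH:

    cat > /tmp/nmic_verify.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nmic_verify.fio
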
00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.453 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:33.454 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:33.454 20:22:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:33.454 20:22:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:40.033 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:40.033 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:40.033 Found net devices under 0000:da:00.0: mlx_0_0 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:40.033 Found net devices under 0000:da:00.1: mlx_0_1 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target 
-- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:40.033 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:40.033 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:40.033 altname enp218s0f0np0 00:13:40.033 altname ens818f0np0 00:13:40.033 inet 192.168.100.8/24 scope global mlx_0_0 00:13:40.033 valid_lft forever preferred_lft forever 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:40.033 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:40.034 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:40.034 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:40.034 altname enp218s0f1np1 00:13:40.034 altname ens818f1np1 00:13:40.034 inet 192.168.100.9/24 scope global mlx_0_1 00:13:40.034 valid_lft forever preferred_lft forever 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ 
-f1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:40.034 192.168.100.9' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:40.034 192.168.100.9' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:40.034 192.168.100.9' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3026374 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3026374 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3026374 ']' 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
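The trace above is nvmftestinit: it enumerates the two mlx5 ports at 0000:da:00.0 and 0000:da:00.1 (device id 0x1015), maps them to the netdevs mlx_0_0 and mlx_0_1, loads the IB/RDMA kernel modules plus nvme-rdma, and reads back the pre-assigned test addresses 192.168.100.8 and 192.168.100.9. The same checks can be reproduced by hand; a rough sketch, where the lspci filter and the modprobe -a form are conveniences rather than commands taken from the trace:

  sudo modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma
  lspci -d 15b3:1015                                            # the two mlx5 ports found above
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.9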
00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:40.034 20:22:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.034 [2024-05-16 20:22:52.368971] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:13:40.034 [2024-05-16 20:22:52.369025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.034 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.034 [2024-05-16 20:22:52.432016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.034 [2024-05-16 20:22:52.511899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.034 [2024-05-16 20:22:52.511932] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.034 [2024-05-16 20:22:52.511939] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.034 [2024-05-16 20:22:52.511946] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.034 [2024-05-16 20:22:52.511951] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.034 [2024-05-16 20:22:52.512001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.034 [2024-05-16 20:22:52.512020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.034 [2024-05-16 20:22:52.512106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.034 [2024-05-16 20:22:52.512107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.292 20:22:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:40.292 20:22:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:13:40.292 20:22:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.292 20:22:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:40.292 20:22:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.292 20:22:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.292 20:22:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:40.550 [2024-05-16 20:22:53.384819] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa7e9b0/0xa82ea0) succeed. 00:13:40.550 [2024-05-16 20:22:53.394989] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa7fff0/0xac4530) succeed. 
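With the environment verified, nvmfappstart launches the target with core mask 0xF and all tracepoint groups enabled, waitforlisten blocks until the application answers on /var/tmp/spdk.sock, and fio.sh then creates the RDMA transport; the two "Create IB device ... succeed" notices confirm that both mlx5 ports registered with it. A hand-run equivalent as a sketch, run from the SPDK checkout (backgrounding the target is an assumption; the RPC commands are copied from the trace):

  sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  sudo ./scripts/rpc.py rpc_get_methods > /dev/null   # succeeds once /var/tmp/spdk.sock is listening
  sudo ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192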
00:13:40.550 20:22:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.820 20:22:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:40.820 20:22:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.081 20:22:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:41.081 20:22:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.339 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:41.339 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.596 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:41.596 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:41.596 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.872 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:41.872 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.178 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:42.178 20:22:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.178 20:22:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:42.178 20:22:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:42.436 20:22:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:42.694 20:22:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:42.694 20:22:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.694 20:22:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:42.694 20:22:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.953 20:22:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:43.211 [2024-05-16 20:22:55.981101] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:43.211 [2024-05-16 20:22:55.981531] rdma.c:3032:nvmf_rdma_listen: 
*NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:43.211 20:22:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:43.211 20:22:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:43.469 20:22:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:44.406 20:22:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:44.406 20:22:57 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:13:44.406 20:22:57 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.406 20:22:57 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:13:44.406 20:22:57 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:13:44.406 20:22:57 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:13:46.938 20:22:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:46.938 20:22:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:46.938 20:22:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.938 20:22:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:13:46.938 20:22:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.938 20:22:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:13:46.938 20:22:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:46.938 [global] 00:13:46.938 thread=1 00:13:46.938 invalidate=1 00:13:46.938 rw=write 00:13:46.938 time_based=1 00:13:46.938 runtime=1 00:13:46.938 ioengine=libaio 00:13:46.938 direct=1 00:13:46.938 bs=4096 00:13:46.938 iodepth=1 00:13:46.938 norandommap=0 00:13:46.938 numjobs=1 00:13:46.938 00:13:46.938 verify_dump=1 00:13:46.938 verify_backlog=512 00:13:46.938 verify_state_save=0 00:13:46.938 do_verify=1 00:13:46.938 verify=crc32c-intel 00:13:46.938 [job0] 00:13:46.938 filename=/dev/nvme0n1 00:13:46.938 [job1] 00:13:46.938 filename=/dev/nvme0n2 00:13:46.938 [job2] 00:13:46.938 filename=/dev/nvme0n3 00:13:46.938 [job3] 00:13:46.938 filename=/dev/nvme0n4 00:13:46.938 Could not set queue depth (nvme0n1) 00:13:46.938 Could not set queue depth (nvme0n2) 00:13:46.938 Could not set queue depth (nvme0n3) 00:13:46.938 Could not set queue depth (nvme0n4) 00:13:46.938 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.938 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.938 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.938 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.938 
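Before this first fio pass (the sequential write run whose job descriptions appear just above), the subsystem was provisioned entirely through rpc.py: seven 64 MiB malloc bdevs with 512-byte blocks (Malloc0 through Malloc6), a two-member raid0 and a three-member concat0 built on five of them, a subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME exposing Malloc0, Malloc1, raid0 and concat0 as its four namespaces, an RDMA listener on 192.168.100.8:4420, and finally an nvme connect from the initiator followed by a wait for all four namespaces to show up. A condensed sketch of that sequence with the commands taken from the trace (the loops only compress repetition, and the original adds the listener before the RAID namespaces):

  for i in $(seq 0 6); do sudo ./scripts/rpc.py bdev_malloc_create 64 512; done   # Malloc0..Malloc6 auto-named
  sudo ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  sudo ./scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
    sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
  done
  sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  sudo nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
       --hostid=803833e2-2ada-e911-906e-0017a4403562
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 4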
fio-3.35 00:13:46.938 Starting 4 threads 00:13:48.341 00:13:48.341 job0: (groupid=0, jobs=1): err= 0: pid=3027808: Thu May 16 20:23:00 2024 00:13:48.341 read: IOPS=3399, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec) 00:13:48.341 slat (nsec): min=6352, max=25181, avg=7289.58, stdev=800.00 00:13:48.341 clat (usec): min=66, max=226, avg=135.51, stdev=21.17 00:13:48.341 lat (usec): min=73, max=233, avg=142.80, stdev=21.20 00:13:48.341 clat percentiles (usec): 00:13:48.341 | 1.00th=[ 85], 5.00th=[ 100], 10.00th=[ 116], 20.00th=[ 122], 00:13:48.341 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 137], 00:13:48.341 | 70.00th=[ 145], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 172], 00:13:48.341 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 217], 99.95th=[ 221], 00:13:48.341 | 99.99th=[ 227] 00:13:48.341 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:48.341 slat (nsec): min=8086, max=44878, avg=9228.84, stdev=1229.08 00:13:48.341 clat (usec): min=63, max=348, avg=129.94, stdev=24.46 00:13:48.341 lat (usec): min=73, max=358, avg=139.17, stdev=24.55 00:13:48.341 clat percentiles (usec): 00:13:48.341 | 1.00th=[ 77], 5.00th=[ 85], 10.00th=[ 104], 20.00th=[ 115], 00:13:48.341 | 30.00th=[ 119], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 133], 00:13:48.341 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:13:48.341 | 99.00th=[ 190], 99.50th=[ 204], 99.90th=[ 289], 99.95th=[ 338], 00:13:48.341 | 99.99th=[ 351] 00:13:48.341 bw ( KiB/s): min=15320, max=15320, per=22.57%, avg=15320.00, stdev= 0.00, samples=1 00:13:48.341 iops : min= 3830, max= 3830, avg=3830.00, stdev= 0.00, samples=1 00:13:48.341 lat (usec) : 100=6.98%, 250=92.96%, 500=0.06% 00:13:48.341 cpu : usr=4.50%, sys=7.50%, ctx=6987, majf=0, minf=1 00:13:48.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.341 issued rwts: total=3403,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.341 job1: (groupid=0, jobs=1): err= 0: pid=3027822: Thu May 16 20:23:00 2024 00:13:48.341 read: IOPS=5130, BW=20.0MiB/s (21.0MB/s)(20.1MiB/1001msec) 00:13:48.341 slat (nsec): min=6177, max=39487, avg=7179.11, stdev=1248.10 00:13:48.341 clat (usec): min=60, max=207, avg=87.13, stdev=20.46 00:13:48.341 lat (usec): min=69, max=213, avg=94.31, stdev=20.63 00:13:48.341 clat percentiles (usec): 00:13:48.341 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:13:48.341 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:13:48.341 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 124], 95.00th=[ 129], 00:13:48.341 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 178], 00:13:48.341 | 99.99th=[ 208] 00:13:48.341 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:13:48.341 slat (nsec): min=7976, max=35920, avg=8907.62, stdev=1134.25 00:13:48.341 clat (usec): min=60, max=226, avg=78.99, stdev=15.35 00:13:48.341 lat (usec): min=68, max=243, avg=87.90, stdev=15.56 00:13:48.341 clat percentiles (usec): 00:13:48.341 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:13:48.341 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 77], 00:13:48.341 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 120], 00:13:48.341 | 99.00th=[ 135], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 169], 
00:13:48.341 | 99.99th=[ 227] 00:13:48.341 bw ( KiB/s): min=24576, max=24576, per=36.20%, avg=24576.00, stdev= 0.00, samples=1 00:13:48.341 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:13:48.341 lat (usec) : 100=86.20%, 250=13.80% 00:13:48.341 cpu : usr=4.70%, sys=12.90%, ctx=10768, majf=0, minf=2 00:13:48.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.342 issued rwts: total=5136,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.342 job2: (groupid=0, jobs=1): err= 0: pid=3027839: Thu May 16 20:23:00 2024 00:13:48.342 read: IOPS=3390, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1001msec) 00:13:48.342 slat (nsec): min=3019, max=29831, avg=7420.02, stdev=1093.04 00:13:48.342 clat (usec): min=78, max=224, avg=135.47, stdev=19.82 00:13:48.342 lat (usec): min=84, max=231, avg=142.89, stdev=19.82 00:13:48.342 clat percentiles (usec): 00:13:48.342 | 1.00th=[ 91], 5.00th=[ 106], 10.00th=[ 117], 20.00th=[ 122], 00:13:48.342 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 137], 00:13:48.342 | 70.00th=[ 143], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 169], 00:13:48.342 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 219], 99.95th=[ 221], 00:13:48.342 | 99.99th=[ 225] 00:13:48.342 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:48.342 slat (nsec): min=3240, max=37136, avg=9452.23, stdev=1625.43 00:13:48.342 clat (usec): min=65, max=350, avg=130.06, stdev=21.65 00:13:48.342 lat (usec): min=70, max=360, avg=139.51, stdev=21.81 00:13:48.342 clat percentiles (usec): 00:13:48.342 | 1.00th=[ 83], 5.00th=[ 94], 10.00th=[ 110], 20.00th=[ 116], 00:13:48.342 | 30.00th=[ 119], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 131], 00:13:48.342 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:13:48.342 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 237], 99.95th=[ 297], 00:13:48.342 | 99.99th=[ 351] 00:13:48.342 bw ( KiB/s): min=15272, max=15272, per=22.49%, avg=15272.00, stdev= 0.00, samples=1 00:13:48.342 iops : min= 3818, max= 3818, avg=3818.00, stdev= 0.00, samples=1 00:13:48.342 lat (usec) : 100=5.09%, 250=94.88%, 500=0.03% 00:13:48.342 cpu : usr=4.00%, sys=8.00%, ctx=6978, majf=0, minf=1 00:13:48.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.342 issued rwts: total=3394,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.342 job3: (groupid=0, jobs=1): err= 0: pid=3027845: Thu May 16 20:23:00 2024 00:13:48.342 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:13:48.342 slat (nsec): min=3808, max=25802, avg=7510.97, stdev=1647.00 00:13:48.342 clat (usec): min=71, max=976, avg=112.63, stdev=34.12 00:13:48.342 lat (usec): min=78, max=983, avg=120.14, stdev=34.78 00:13:48.342 clat percentiles (usec): 00:13:48.342 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 87], 00:13:48.342 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 110], 00:13:48.342 | 70.00th=[ 127], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 167], 00:13:48.342 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 227], 
99.95th=[ 233], 00:13:48.342 | 99.99th=[ 979] 00:13:48.342 write: IOPS=4185, BW=16.4MiB/s (17.1MB/s)(16.4MiB/1001msec); 0 zone resets 00:13:48.342 slat (nsec): min=8289, max=40271, avg=9959.58, stdev=1843.64 00:13:48.342 clat (usec): min=63, max=215, avg=106.00, stdev=27.82 00:13:48.342 lat (usec): min=77, max=224, avg=115.96, stdev=28.44 00:13:48.342 clat percentiles (usec): 00:13:48.342 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 84], 00:13:48.342 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 103], 00:13:48.342 | 70.00th=[ 120], 80.00th=[ 130], 90.00th=[ 151], 95.00th=[ 157], 00:13:48.342 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 210], 99.95th=[ 212], 00:13:48.342 | 99.99th=[ 217] 00:13:48.342 bw ( KiB/s): min=18080, max=18080, per=26.63%, avg=18080.00, stdev= 0.00, samples=1 00:13:48.342 iops : min= 4520, max= 4520, avg=4520.00, stdev= 0.00, samples=1 00:13:48.342 lat (usec) : 100=56.88%, 250=43.11%, 1000=0.01% 00:13:48.342 cpu : usr=5.40%, sys=8.10%, ctx=8286, majf=0, minf=1 00:13:48.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.342 issued rwts: total=4096,4190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.342 00:13:48.342 Run status group 0 (all jobs): 00:13:48.342 READ: bw=62.5MiB/s (65.6MB/s), 13.2MiB/s-20.0MiB/s (13.9MB/s-21.0MB/s), io=62.6MiB (65.7MB), run=1001-1001msec 00:13:48.342 WRITE: bw=66.3MiB/s (69.5MB/s), 14.0MiB/s-22.0MiB/s (14.7MB/s-23.0MB/s), io=66.4MiB (69.6MB), run=1001-1001msec 00:13:48.342 00:13:48.342 Disk stats (read/write): 00:13:48.342 nvme0n1: ios=3085/3072, merge=0/0, ticks=405/374, in_queue=779, util=86.57% 00:13:48.342 nvme0n2: ios=4525/4608, merge=0/0, ticks=364/321, in_queue=685, util=87.22% 00:13:48.342 nvme0n3: ios=3032/3072, merge=0/0, ticks=382/368, in_queue=750, util=89.30% 00:13:48.342 nvme0n4: ios=3584/3834, merge=0/0, ticks=365/369, in_queue=734, util=89.86% 00:13:48.342 20:23:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:48.342 [global] 00:13:48.342 thread=1 00:13:48.342 invalidate=1 00:13:48.342 rw=randwrite 00:13:48.342 time_based=1 00:13:48.342 runtime=1 00:13:48.342 ioengine=libaio 00:13:48.342 direct=1 00:13:48.342 bs=4096 00:13:48.342 iodepth=1 00:13:48.342 norandommap=0 00:13:48.342 numjobs=1 00:13:48.342 00:13:48.342 verify_dump=1 00:13:48.342 verify_backlog=512 00:13:48.342 verify_state_save=0 00:13:48.342 do_verify=1 00:13:48.342 verify=crc32c-intel 00:13:48.342 [job0] 00:13:48.342 filename=/dev/nvme0n1 00:13:48.342 [job1] 00:13:48.342 filename=/dev/nvme0n2 00:13:48.342 [job2] 00:13:48.342 filename=/dev/nvme0n3 00:13:48.342 [job3] 00:13:48.342 filename=/dev/nvme0n4 00:13:48.342 Could not set queue depth (nvme0n1) 00:13:48.342 Could not set queue depth (nvme0n2) 00:13:48.342 Could not set queue depth (nvme0n3) 00:13:48.342 Could not set queue depth (nvme0n4) 00:13:48.342 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:48.342 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:48.342 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:13:48.342 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:48.342 fio-3.35 00:13:48.342 Starting 4 threads 00:13:49.721 00:13:49.721 job0: (groupid=0, jobs=1): err= 0: pid=3028241: Thu May 16 20:23:02 2024 00:13:49.721 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:13:49.721 slat (nsec): min=6057, max=18160, avg=7015.91, stdev=602.01 00:13:49.721 clat (usec): min=67, max=273, avg=85.56, stdev= 7.50 00:13:49.721 lat (usec): min=74, max=280, avg=92.58, stdev= 7.53 00:13:49.721 clat percentiles (usec): 00:13:49.721 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:13:49.721 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 00:13:49.721 | 70.00th=[ 89], 80.00th=[ 91], 90.00th=[ 95], 95.00th=[ 98], 00:13:49.721 | 99.00th=[ 106], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 122], 00:13:49.721 | 99.99th=[ 273] 00:13:49.721 write: IOPS=5567, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1001msec); 0 zone resets 00:13:49.721 slat (nsec): min=7732, max=75276, avg=8611.01, stdev=1223.73 00:13:49.721 clat (usec): min=64, max=268, avg=82.18, stdev= 7.23 00:13:49.721 lat (usec): min=72, max=277, avg=90.79, stdev= 7.40 00:13:49.721 clat percentiles (usec): 00:13:49.721 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:13:49.721 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 84], 00:13:49.721 | 70.00th=[ 85], 80.00th=[ 88], 90.00th=[ 91], 95.00th=[ 95], 00:13:49.721 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 113], 99.95th=[ 117], 00:13:49.721 | 99.99th=[ 269] 00:13:49.721 bw ( KiB/s): min=22064, max=22064, per=29.18%, avg=22064.00, stdev= 0.00, samples=1 00:13:49.721 iops : min= 5516, max= 5516, avg=5516.00, stdev= 0.00, samples=1 00:13:49.721 lat (usec) : 100=97.48%, 250=2.50%, 500=0.02% 00:13:49.721 cpu : usr=5.70%, sys=11.40%, ctx=10694, majf=0, minf=1 00:13:49.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.721 issued rwts: total=5120,5573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.721 job1: (groupid=0, jobs=1): err= 0: pid=3028252: Thu May 16 20:23:02 2024 00:13:49.721 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:13:49.721 slat (nsec): min=6230, max=21482, avg=7145.45, stdev=668.65 00:13:49.721 clat (usec): min=68, max=195, avg=114.05, stdev=28.88 00:13:49.721 lat (usec): min=74, max=202, avg=121.19, stdev=28.96 00:13:49.721 clat percentiles (usec): 00:13:49.721 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 83], 00:13:49.721 | 30.00th=[ 88], 40.00th=[ 101], 50.00th=[ 123], 60.00th=[ 128], 00:13:49.721 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 167], 00:13:49.721 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 194], 00:13:49.721 | 99.99th=[ 196] 00:13:49.721 write: IOPS=4118, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1001msec); 0 zone resets 00:13:49.721 slat (nsec): min=8003, max=37814, avg=8923.67, stdev=1052.92 00:13:49.721 clat (usec): min=62, max=278, avg=109.01, stdev=30.19 00:13:49.721 lat (usec): min=71, max=294, avg=117.93, stdev=30.29 00:13:49.721 clat percentiles (usec): 00:13:49.721 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 78], 00:13:49.721 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 117], 60.00th=[ 124], 00:13:49.721 | 70.00th=[ 129], 
80.00th=[ 133], 90.00th=[ 143], 95.00th=[ 165], 00:13:49.721 | 99.00th=[ 182], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 196], 00:13:49.721 | 99.99th=[ 277] 00:13:49.721 bw ( KiB/s): min=20480, max=20480, per=27.08%, avg=20480.00, stdev= 0.00, samples=1 00:13:49.721 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:13:49.721 lat (usec) : 100=42.41%, 250=57.57%, 500=0.01% 00:13:49.721 cpu : usr=4.50%, sys=9.30%, ctx=8219, majf=0, minf=1 00:13:49.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.721 issued rwts: total=4096,4123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.721 job2: (groupid=0, jobs=1): err= 0: pid=3028271: Thu May 16 20:23:02 2024 00:13:49.721 read: IOPS=3755, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec) 00:13:49.721 slat (nsec): min=6106, max=26330, avg=7492.64, stdev=981.80 00:13:49.721 clat (usec): min=76, max=198, avg=120.13, stdev=21.17 00:13:49.721 lat (usec): min=83, max=205, avg=127.63, stdev=21.26 00:13:49.721 clat percentiles (usec): 00:13:49.721 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 96], 00:13:49.721 | 30.00th=[ 105], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 129], 00:13:49.721 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 155], 00:13:49.721 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 184], 99.95th=[ 192], 00:13:49.721 | 99.99th=[ 198] 00:13:49.721 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:13:49.721 slat (nsec): min=8153, max=40517, avg=9378.41, stdev=1241.13 00:13:49.721 clat (usec): min=73, max=189, avg=113.41, stdev=22.07 00:13:49.721 lat (usec): min=82, max=198, avg=122.78, stdev=22.20 00:13:49.721 clat percentiles (usec): 00:13:49.721 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 90], 00:13:49.721 | 30.00th=[ 95], 40.00th=[ 108], 50.00th=[ 119], 60.00th=[ 124], 00:13:49.721 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 147], 00:13:49.721 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 186], 99.95th=[ 188], 00:13:49.721 | 99.99th=[ 190] 00:13:49.721 bw ( KiB/s): min=17704, max=17704, per=23.41%, avg=17704.00, stdev= 0.00, samples=1 00:13:49.721 iops : min= 4426, max= 4426, avg=4426.00, stdev= 0.00, samples=1 00:13:49.721 lat (usec) : 100=31.58%, 250=68.42% 00:13:49.721 cpu : usr=4.50%, sys=8.80%, ctx=7855, majf=0, minf=1 00:13:49.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.721 issued rwts: total=3759,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.721 job3: (groupid=0, jobs=1): err= 0: pid=3028278: Thu May 16 20:23:02 2024 00:13:49.721 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:13:49.721 slat (nsec): min=6282, max=26905, avg=7206.46, stdev=719.64 00:13:49.721 clat (usec): min=73, max=130, avg=89.22, stdev= 6.87 00:13:49.721 lat (usec): min=80, max=137, avg=96.43, stdev= 6.91 00:13:49.721 clat percentiles (usec): 00:13:49.721 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:13:49.721 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90], 00:13:49.721 | 70.00th=[ 92], 
80.00th=[ 95], 90.00th=[ 98], 95.00th=[ 102], 00:13:49.721 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 120], 99.95th=[ 126], 00:13:49.721 | 99.99th=[ 131] 00:13:49.721 write: IOPS=5126, BW=20.0MiB/s (21.0MB/s)(20.0MiB/1001msec); 0 zone resets 00:13:49.721 slat (nsec): min=8134, max=35536, avg=8994.07, stdev=886.43 00:13:49.721 clat (usec): min=69, max=128, avg=85.78, stdev= 6.89 00:13:49.721 lat (usec): min=78, max=138, avg=94.77, stdev= 6.98 00:13:49.721 clat percentiles (usec): 00:13:49.721 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:13:49.721 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 00:13:49.721 | 70.00th=[ 89], 80.00th=[ 91], 90.00th=[ 95], 95.00th=[ 99], 00:13:49.721 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 119], 99.95th=[ 121], 00:13:49.721 | 99.99th=[ 129] 00:13:49.722 bw ( KiB/s): min=20480, max=20480, per=27.08%, avg=20480.00, stdev= 0.00, samples=1 00:13:49.722 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:13:49.722 lat (usec) : 100=94.48%, 250=5.52% 00:13:49.722 cpu : usr=6.30%, sys=10.50%, ctx=10252, majf=0, minf=2 00:13:49.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.722 issued rwts: total=5120,5132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.722 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.722 00:13:49.722 Run status group 0 (all jobs): 00:13:49.722 READ: bw=70.6MiB/s (74.0MB/s), 14.7MiB/s-20.0MiB/s (15.4MB/s-20.9MB/s), io=70.7MiB (74.1MB), run=1001-1001msec 00:13:49.722 WRITE: bw=73.8MiB/s (77.4MB/s), 16.0MiB/s-21.7MiB/s (16.8MB/s-22.8MB/s), io=73.9MiB (77.5MB), run=1001-1001msec 00:13:49.722 00:13:49.722 Disk stats (read/write): 00:13:49.722 nvme0n1: ios=4569/4608, merge=0/0, ticks=371/337, in_queue=708, util=86.27% 00:13:49.722 nvme0n2: ios=3589/3599, merge=0/0, ticks=385/360, in_queue=745, util=86.82% 00:13:49.722 nvme0n3: ios=3235/3584, merge=0/0, ticks=351/378, in_queue=729, util=89.09% 00:13:49.722 nvme0n4: ios=4151/4608, merge=0/0, ticks=340/368, in_queue=708, util=89.74% 00:13:49.722 20:23:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:49.722 [global] 00:13:49.722 thread=1 00:13:49.722 invalidate=1 00:13:49.722 rw=write 00:13:49.722 time_based=1 00:13:49.722 runtime=1 00:13:49.722 ioengine=libaio 00:13:49.722 direct=1 00:13:49.722 bs=4096 00:13:49.722 iodepth=128 00:13:49.722 norandommap=0 00:13:49.722 numjobs=1 00:13:49.722 00:13:49.722 verify_dump=1 00:13:49.722 verify_backlog=512 00:13:49.722 verify_state_save=0 00:13:49.722 do_verify=1 00:13:49.722 verify=crc32c-intel 00:13:49.722 [job0] 00:13:49.722 filename=/dev/nvme0n1 00:13:49.722 [job1] 00:13:49.722 filename=/dev/nvme0n2 00:13:49.722 [job2] 00:13:49.722 filename=/dev/nvme0n3 00:13:49.722 [job3] 00:13:49.722 filename=/dev/nvme0n4 00:13:49.722 Could not set queue depth (nvme0n1) 00:13:49.722 Could not set queue depth (nvme0n2) 00:13:49.722 Could not set queue depth (nvme0n3) 00:13:49.722 Could not set queue depth (nvme0n4) 00:13:49.981 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.981 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.981 job2: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.981 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.981 fio-3.35 00:13:49.981 Starting 4 threads 00:13:51.356 00:13:51.356 job0: (groupid=0, jobs=1): err= 0: pid=3028700: Thu May 16 20:23:04 2024 00:13:51.356 read: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(44.0MiB/1005msec) 00:13:51.356 slat (nsec): min=1470, max=1433.9k, avg=42571.15, stdev=152960.83 00:13:51.356 clat (usec): min=4591, max=7045, avg=5716.83, stdev=271.16 00:13:51.356 lat (usec): min=4777, max=7055, avg=5759.40, stdev=268.25 00:13:51.356 clat percentiles (usec): 00:13:51.356 | 1.00th=[ 4948], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5538], 00:13:51.356 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5800], 00:13:51.356 | 70.00th=[ 5866], 80.00th=[ 5932], 90.00th=[ 6063], 95.00th=[ 6128], 00:13:51.356 | 99.00th=[ 6390], 99.50th=[ 6456], 99.90th=[ 6652], 99.95th=[ 6849], 00:13:51.356 | 99.99th=[ 6980] 00:13:51.356 write: IOPS=11.6k, BW=45.1MiB/s (47.3MB/s)(45.4MiB/1005msec); 0 zone resets 00:13:51.356 slat (nsec): min=1993, max=1539.2k, avg=41426.69, stdev=145709.29 00:13:51.356 clat (usec): min=2333, max=9858, avg=5440.34, stdev=429.56 00:13:51.356 lat (usec): min=2345, max=9861, avg=5481.77, stdev=429.62 00:13:51.356 clat percentiles (usec): 00:13:51.356 | 1.00th=[ 4293], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5276], 00:13:51.356 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5473], 60.00th=[ 5538], 00:13:51.356 | 70.00th=[ 5538], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 5866], 00:13:51.356 | 99.00th=[ 6325], 99.50th=[ 7767], 99.90th=[ 9241], 99.95th=[ 9896], 00:13:51.356 | 99.99th=[ 9896] 00:13:51.356 bw ( KiB/s): min=45880, max=46000, per=51.71%, avg=45940.00, stdev=84.85, samples=2 00:13:51.356 iops : min=11470, max=11500, avg=11485.00, stdev=21.21, samples=2 00:13:51.356 lat (msec) : 4=0.30%, 10=99.70% 00:13:51.356 cpu : usr=5.68%, sys=9.46%, ctx=1475, majf=0, minf=1 00:13:51.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:13:51.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.356 issued rwts: total=11264,11612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.356 job1: (groupid=0, jobs=1): err= 0: pid=3028703: Thu May 16 20:23:04 2024 00:13:51.356 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.1MiB/1007msec) 00:13:51.356 slat (nsec): min=1395, max=5242.5k, avg=114026.94, stdev=555964.18 00:13:51.356 clat (usec): min=2668, max=20123, avg=14771.05, stdev=1088.14 00:13:51.356 lat (usec): min=7658, max=20130, avg=14885.08, stdev=1187.99 00:13:51.356 clat percentiles (usec): 00:13:51.356 | 1.00th=[13698], 5.00th=[13960], 10.00th=[14091], 20.00th=[14222], 00:13:51.356 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:13:51.356 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15270], 95.00th=[16909], 00:13:51.356 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:13:51.356 | 99.99th=[20055] 00:13:51.356 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:13:51.356 slat (nsec): min=1907, max=5194.4k, avg=111873.93, stdev=550530.13 00:13:51.356 clat (usec): min=8438, max=21767, avg=14451.23, stdev=1209.81 00:13:51.356 lat (usec): min=8442, max=21771, avg=14563.10, stdev=1305.01 
00:13:51.356 clat percentiles (usec): 00:13:51.356 | 1.00th=[ 9765], 5.00th=[13566], 10.00th=[13698], 20.00th=[13960], 00:13:51.356 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14222], 60.00th=[14353], 00:13:51.356 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15139], 95.00th=[16712], 00:13:51.356 | 99.00th=[19006], 99.50th=[19268], 99.90th=[21627], 99.95th=[21890], 00:13:51.356 | 99.99th=[21890] 00:13:51.356 bw ( KiB/s): min=17688, max=18312, per=20.26%, avg=18000.00, stdev=441.23, samples=2 00:13:51.357 iops : min= 4422, max= 4578, avg=4500.00, stdev=110.31, samples=2 00:13:51.357 lat (msec) : 4=0.01%, 10=0.88%, 20=99.00%, 50=0.10% 00:13:51.357 cpu : usr=2.78%, sys=4.17%, ctx=663, majf=0, minf=1 00:13:51.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:51.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.357 issued rwts: total=4116,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.357 job2: (groupid=0, jobs=1): err= 0: pid=3028704: Thu May 16 20:23:04 2024 00:13:51.357 read: IOPS=2783, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1006msec) 00:13:51.357 slat (nsec): min=1484, max=4123.4k, avg=172162.44, stdev=533054.38 00:13:51.357 clat (usec): min=5756, max=26922, avg=21859.37, stdev=1777.36 00:13:51.357 lat (usec): min=6488, max=27668, avg=22031.54, stdev=1820.89 00:13:51.357 clat percentiles (usec): 00:13:51.357 | 1.00th=[11076], 5.00th=[20841], 10.00th=[21103], 20.00th=[21365], 00:13:51.357 | 30.00th=[21627], 40.00th=[21890], 50.00th=[21890], 60.00th=[22152], 00:13:51.357 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22676], 95.00th=[23725], 00:13:51.357 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26870], 99.95th=[26870], 00:13:51.357 | 99.99th=[26870] 00:13:51.357 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:13:51.357 slat (usec): min=2, max=4053, avg=164.23, stdev=507.64 00:13:51.357 clat (usec): min=14760, max=25729, avg=21443.11, stdev=940.61 00:13:51.357 lat (usec): min=14768, max=25736, avg=21607.33, stdev=1027.10 00:13:51.357 clat percentiles (usec): 00:13:51.357 | 1.00th=[19530], 5.00th=[20317], 10.00th=[20579], 20.00th=[20841], 00:13:51.357 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:13:51.357 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22414], 95.00th=[22676], 00:13:51.357 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25822], 00:13:51.357 | 99.99th=[25822] 00:13:51.357 bw ( KiB/s): min=12288, max=12288, per=13.83%, avg=12288.00, stdev= 0.00, samples=2 00:13:51.357 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:13:51.357 lat (msec) : 10=0.19%, 20=2.01%, 50=97.80% 00:13:51.357 cpu : usr=2.09%, sys=3.58%, ctx=915, majf=0, minf=1 00:13:51.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:13:51.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.357 issued rwts: total=2800,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.357 job3: (groupid=0, jobs=1): err= 0: pid=3028705: Thu May 16 20:23:04 2024 00:13:51.357 read: IOPS=2784, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1006msec) 00:13:51.357 slat (nsec): min=1536, max=4139.3k, avg=172902.51, stdev=531447.74 
00:13:51.357 clat (usec): min=5753, max=26917, avg=21847.82, stdev=1854.87 00:13:51.357 lat (usec): min=6458, max=27660, avg=22020.72, stdev=1895.38 00:13:51.357 clat percentiles (usec): 00:13:51.357 | 1.00th=[11994], 5.00th=[20841], 10.00th=[21103], 20.00th=[21365], 00:13:51.357 | 30.00th=[21627], 40.00th=[21890], 50.00th=[21890], 60.00th=[22152], 00:13:51.357 | 70.00th=[22152], 80.00th=[22676], 90.00th=[22676], 95.00th=[23987], 00:13:51.357 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26870], 99.95th=[26870], 00:13:51.357 | 99.99th=[26870] 00:13:51.357 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:13:51.357 slat (usec): min=2, max=4027, avg=163.36, stdev=507.73 00:13:51.357 clat (usec): min=14758, max=25761, avg=21438.77, stdev=936.56 00:13:51.357 lat (usec): min=14765, max=25768, avg=21602.13, stdev=1030.36 00:13:51.357 clat percentiles (usec): 00:13:51.357 | 1.00th=[19530], 5.00th=[20317], 10.00th=[20579], 20.00th=[20841], 00:13:51.357 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:13:51.357 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22676], 00:13:51.357 | 99.00th=[24773], 99.50th=[25297], 99.90th=[25822], 99.95th=[25822], 00:13:51.357 | 99.99th=[25822] 00:13:51.357 bw ( KiB/s): min=12288, max=12288, per=13.83%, avg=12288.00, stdev= 0.00, samples=2 00:13:51.357 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:13:51.357 lat (msec) : 10=0.20%, 20=1.92%, 50=97.87% 00:13:51.357 cpu : usr=2.39%, sys=3.38%, ctx=919, majf=0, minf=1 00:13:51.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:13:51.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.357 issued rwts: total=2801,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.357 00:13:51.357 Run status group 0 (all jobs): 00:13:51.357 READ: bw=81.4MiB/s (85.3MB/s), 10.9MiB/s-43.8MiB/s (11.4MB/s-45.9MB/s), io=82.0MiB (85.9MB), run=1005-1007msec 00:13:51.357 WRITE: bw=86.8MiB/s (91.0MB/s), 11.9MiB/s-45.1MiB/s (12.5MB/s-47.3MB/s), io=87.4MiB (91.6MB), run=1005-1007msec 00:13:51.357 00:13:51.357 Disk stats (read/write): 00:13:51.357 nvme0n1: ios=9778/9823, merge=0/0, ticks=54295/52093, in_queue=106388, util=86.97% 00:13:51.357 nvme0n2: ios=3584/3859, merge=0/0, ticks=25776/26947, in_queue=52723, util=87.32% 00:13:51.357 nvme0n3: ios=2449/2560, merge=0/0, ticks=17750/17797, in_queue=35547, util=89.22% 00:13:51.357 nvme0n4: ios=2445/2560, merge=0/0, ticks=17735/17803, in_queue=35538, util=89.78% 00:13:51.357 20:23:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:51.357 [global] 00:13:51.357 thread=1 00:13:51.357 invalidate=1 00:13:51.357 rw=randwrite 00:13:51.357 time_based=1 00:13:51.357 runtime=1 00:13:51.357 ioengine=libaio 00:13:51.357 direct=1 00:13:51.357 bs=4096 00:13:51.357 iodepth=128 00:13:51.357 norandommap=0 00:13:51.357 numjobs=1 00:13:51.357 00:13:51.357 verify_dump=1 00:13:51.357 verify_backlog=512 00:13:51.357 verify_state_save=0 00:13:51.357 do_verify=1 00:13:51.357 verify=crc32c-intel 00:13:51.357 [job0] 00:13:51.357 filename=/dev/nvme0n1 00:13:51.357 [job1] 00:13:51.357 filename=/dev/nvme0n2 00:13:51.357 [job2] 00:13:51.357 filename=/dev/nvme0n3 00:13:51.357 [job3] 00:13:51.357 filename=/dev/nvme0n4 
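The job file dumped above is generated by SPDK's scripts/fio-wrapper from the flags -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v. A minimal hand-written equivalent is sketched below for local reproduction only; it assumes the four namespaces appear as /dev/nvme0n1 through /dev/nvme0n4 (as they do in this run) and copies the parameter values from the dump above rather than from the wrapper's source.

  # Sketch only: re-creates the wrapper-generated job file by hand.
  # Device paths are assumptions taken from this run's log.
  cat > randwrite-verify.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=128
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4
  EOF

  fio randwrite-verify.fio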
00:13:51.357 Could not set queue depth (nvme0n1) 00:13:51.357 Could not set queue depth (nvme0n2) 00:13:51.357 Could not set queue depth (nvme0n3) 00:13:51.357 Could not set queue depth (nvme0n4) 00:13:51.615 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.615 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.615 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.615 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.615 fio-3.35 00:13:51.615 Starting 4 threads 00:13:52.991 00:13:52.991 job0: (groupid=0, jobs=1): err= 0: pid=3029072: Thu May 16 20:23:05 2024 00:13:52.991 read: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec) 00:13:52.991 slat (nsec): min=1434, max=1477.1k, avg=53347.18, stdev=194883.05 00:13:52.991 clat (usec): min=5240, max=12106, avg=7073.40, stdev=540.40 00:13:52.991 lat (usec): min=5246, max=12109, avg=7126.75, stdev=544.86 00:13:52.991 clat percentiles (usec): 00:13:52.991 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:13:52.991 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7308], 00:13:52.991 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7635], 95.00th=[ 7767], 00:13:52.992 | 99.00th=[ 8029], 99.50th=[ 8291], 99.90th=[12125], 99.95th=[12125], 00:13:52.992 | 99.99th=[12125] 00:13:52.992 write: IOPS=9275, BW=36.2MiB/s (38.0MB/s)(36.4MiB/1005msec); 0 zone resets 00:13:52.992 slat (nsec): min=1881, max=1489.2k, avg=51159.04, stdev=184159.92 00:13:52.992 clat (usec): min=1624, max=11110, avg=6684.56, stdev=626.95 00:13:52.992 lat (usec): min=1633, max=11114, avg=6735.72, stdev=633.78 00:13:52.992 clat percentiles (usec): 00:13:52.992 | 1.00th=[ 5080], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6259], 00:13:52.992 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6980], 00:13:52.992 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7242], 95.00th=[ 7439], 00:13:52.992 | 99.00th=[ 7832], 99.50th=[ 8160], 99.90th=[10159], 99.95th=[11076], 00:13:52.992 | 99.99th=[11076] 00:13:52.992 bw ( KiB/s): min=33728, max=40000, per=39.76%, avg=36864.00, stdev=4434.97, samples=2 00:13:52.992 iops : min= 8432, max=10000, avg=9216.00, stdev=1108.74, samples=2 00:13:52.992 lat (msec) : 2=0.05%, 4=0.17%, 10=99.49%, 20=0.28% 00:13:52.992 cpu : usr=4.58%, sys=7.17%, ctx=1202, majf=0, minf=1 00:13:52.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:52.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:52.992 issued rwts: total=9216,9322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:52.992 job1: (groupid=0, jobs=1): err= 0: pid=3029073: Thu May 16 20:23:05 2024 00:13:52.992 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:13:52.992 slat (nsec): min=1539, max=3861.2k, avg=153220.81, stdev=474138.96 00:13:52.992 clat (usec): min=14633, max=26288, avg=19667.70, stdev=2267.51 00:13:52.992 lat (usec): min=16400, max=26317, avg=19820.92, stdev=2285.30 00:13:52.992 clat percentiles (usec): 00:13:52.992 | 1.00th=[16712], 5.00th=[17171], 10.00th=[17433], 20.00th=[17957], 00:13:52.992 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18744], 60.00th=[19006], 
00:13:52.992 | 70.00th=[21627], 80.00th=[22414], 90.00th=[23200], 95.00th=[23462], 00:13:52.992 | 99.00th=[24511], 99.50th=[25035], 99.90th=[25560], 99.95th=[25822], 00:13:52.992 | 99.99th=[26346] 00:13:52.992 write: IOPS=3397, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1004msec); 0 zone resets 00:13:52.992 slat (nsec): min=1924, max=4025.4k, avg=151166.80, stdev=449846.40 00:13:52.992 clat (usec): min=3043, max=26853, avg=19400.08, stdev=2980.87 00:13:52.992 lat (usec): min=3822, max=26867, avg=19551.24, stdev=2989.58 00:13:52.992 clat percentiles (usec): 00:13:52.992 | 1.00th=[ 7701], 5.00th=[16712], 10.00th=[17433], 20.00th=[17695], 00:13:52.992 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[19268], 00:13:52.992 | 70.00th=[21627], 80.00th=[22414], 90.00th=[23200], 95.00th=[23462], 00:13:52.992 | 99.00th=[24249], 99.50th=[24511], 99.90th=[26608], 99.95th=[26870], 00:13:52.992 | 99.99th=[26870] 00:13:52.992 bw ( KiB/s): min=12288, max=13984, per=14.17%, avg=13136.00, stdev=1199.25, samples=2 00:13:52.992 iops : min= 3072, max= 3496, avg=3284.00, stdev=299.81, samples=2 00:13:52.992 lat (msec) : 4=0.26%, 10=0.62%, 20=64.20%, 50=34.92% 00:13:52.992 cpu : usr=1.99%, sys=3.29%, ctx=829, majf=0, minf=1 00:13:52.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:52.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:52.992 issued rwts: total=3072,3411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:52.992 job2: (groupid=0, jobs=1): err= 0: pid=3029074: Thu May 16 20:23:05 2024 00:13:52.992 read: IOPS=5980, BW=23.4MiB/s (24.5MB/s)(23.4MiB/1001msec) 00:13:52.992 slat (nsec): min=1406, max=1959.0k, avg=82825.34, stdev=258247.73 00:13:52.992 clat (usec): min=514, max=18973, avg=10551.73, stdev=3853.80 00:13:52.992 lat (usec): min=1559, max=18980, avg=10634.55, stdev=3885.37 00:13:52.992 clat percentiles (usec): 00:13:52.992 | 1.00th=[ 4752], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8225], 00:13:52.992 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:13:52.992 | 70.00th=[ 9372], 80.00th=[16581], 90.00th=[17433], 95.00th=[17957], 00:13:52.992 | 99.00th=[18482], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:13:52.992 | 99.99th=[19006] 00:13:52.992 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:13:52.992 slat (nsec): min=1939, max=2761.4k, avg=78685.77, stdev=248122.70 00:13:52.992 clat (usec): min=6970, max=18407, avg=10303.86, stdev=3661.26 00:13:52.992 lat (usec): min=6973, max=18637, avg=10382.55, stdev=3692.51 00:13:52.992 clat percentiles (usec): 00:13:52.992 | 1.00th=[ 7373], 5.00th=[ 7635], 10.00th=[ 7832], 20.00th=[ 7898], 00:13:52.992 | 30.00th=[ 8029], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8586], 00:13:52.992 | 70.00th=[ 9110], 80.00th=[16188], 90.00th=[16712], 95.00th=[16909], 00:13:52.992 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:13:52.992 | 99.99th=[18482] 00:13:52.992 bw ( KiB/s): min=18976, max=18976, per=20.47%, avg=18976.00, stdev= 0.00, samples=1 00:13:52.992 iops : min= 4744, max= 4744, avg=4744.00, stdev= 0.00, samples=1 00:13:52.992 lat (usec) : 750=0.01% 00:13:52.992 lat (msec) : 2=0.12%, 4=0.26%, 10=74.62%, 20=24.98% 00:13:52.992 cpu : usr=3.20%, sys=4.60%, ctx=1239, majf=0, minf=1 00:13:52.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, 
>=64=99.5% 00:13:52.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:52.992 issued rwts: total=5986,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:52.992 job3: (groupid=0, jobs=1): err= 0: pid=3029075: Thu May 16 20:23:05 2024 00:13:52.992 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:13:52.992 slat (nsec): min=1504, max=2872.2k, avg=117551.77, stdev=326115.35 00:13:52.992 clat (usec): min=12465, max=19343, avg=15250.10, stdev=1709.85 00:13:52.992 lat (usec): min=12533, max=19349, avg=15367.65, stdev=1727.61 00:13:52.992 clat percentiles (usec): 00:13:52.992 | 1.00th=[12780], 5.00th=[13304], 10.00th=[13566], 20.00th=[13698], 00:13:52.992 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14353], 60.00th=[15270], 00:13:52.992 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:13:52.992 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:13:52.992 | 99.99th=[19268] 00:13:52.992 write: IOPS=4398, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1004msec); 0 zone resets 00:13:52.992 slat (nsec): min=1978, max=3251.7k, avg=113596.83, stdev=323459.32 00:13:52.992 clat (usec): min=2968, max=18537, avg=14576.26, stdev=1829.89 00:13:52.992 lat (usec): min=4625, max=18546, avg=14689.86, stdev=1843.85 00:13:52.992 clat percentiles (usec): 00:13:52.992 | 1.00th=[ 8586], 5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 00:13:52.992 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14484], 00:13:52.992 | 70.00th=[16057], 80.00th=[16581], 90.00th=[16909], 95.00th=[17171], 00:13:52.992 | 99.00th=[17695], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:13:52.992 | 99.99th=[18482] 00:13:52.992 bw ( KiB/s): min=16384, max=17928, per=18.51%, avg=17156.00, stdev=1091.77, samples=2 00:13:52.992 iops : min= 4096, max= 4482, avg=4289.00, stdev=272.94, samples=2 00:13:52.992 lat (msec) : 4=0.01%, 10=0.68%, 20=99.31% 00:13:52.992 cpu : usr=1.60%, sys=5.08%, ctx=1132, majf=0, minf=1 00:13:52.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:52.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:52.992 issued rwts: total=4096,4416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:52.992 00:13:52.992 Run status group 0 (all jobs): 00:13:52.992 READ: bw=86.9MiB/s (91.2MB/s), 12.0MiB/s-35.8MiB/s (12.5MB/s-37.6MB/s), io=87.4MiB (91.6MB), run=1001-1005msec 00:13:52.992 WRITE: bw=90.5MiB/s (94.9MB/s), 13.3MiB/s-36.2MiB/s (13.9MB/s-38.0MB/s), io=91.0MiB (95.4MB), run=1001-1005msec 00:13:52.992 00:13:52.992 Disk stats (read/write): 00:13:52.992 nvme0n1: ios=7729/7810, merge=0/0, ticks=52776/50428, in_queue=103204, util=84.17% 00:13:52.992 nvme0n2: ios=2560/2707, merge=0/0, ticks=12741/13479, in_queue=26220, util=85.02% 00:13:52.992 nvme0n3: ios=4608/4968, merge=0/0, ticks=14964/15018, in_queue=29982, util=88.35% 00:13:52.992 nvme0n4: ios=3348/3584, merge=0/0, ticks=17046/17359, in_queue=34405, util=89.39% 00:13:52.992 20:23:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:52.992 20:23:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3029287 00:13:52.992 20:23:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:52.992 20:23:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:52.992 [global] 00:13:52.992 thread=1 00:13:52.992 invalidate=1 00:13:52.992 rw=read 00:13:52.992 time_based=1 00:13:52.992 runtime=10 00:13:52.992 ioengine=libaio 00:13:52.992 direct=1 00:13:52.992 bs=4096 00:13:52.992 iodepth=1 00:13:52.992 norandommap=1 00:13:52.992 numjobs=1 00:13:52.992 00:13:52.992 [job0] 00:13:52.992 filename=/dev/nvme0n1 00:13:52.992 [job1] 00:13:52.992 filename=/dev/nvme0n2 00:13:52.992 [job2] 00:13:52.992 filename=/dev/nvme0n3 00:13:52.992 [job3] 00:13:52.992 filename=/dev/nvme0n4 00:13:52.992 Could not set queue depth (nvme0n1) 00:13:52.992 Could not set queue depth (nvme0n2) 00:13:52.992 Could not set queue depth (nvme0n3) 00:13:52.992 Could not set queue depth (nvme0n4) 00:13:52.992 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:52.992 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:52.992 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:52.992 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:52.992 fio-3.35 00:13:52.992 Starting 4 threads 00:13:56.278 20:23:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:56.278 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=78274560, buflen=4096 00:13:56.278 fio: pid=3029452, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.278 20:23:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:56.278 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=106016768, buflen=4096 00:13:56.278 fio: pid=3029451, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.278 20:23:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.278 20:23:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:56.278 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3731456, buflen=4096 00:13:56.278 fio: pid=3029446, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.278 20:23:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.278 20:23:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:56.538 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=46727168, buflen=4096 00:13:56.538 fio: pid=3029448, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.538 20:23:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.538 20:23:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:56.538 00:13:56.538 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=3029446: Thu May 16 20:23:09 2024 00:13:56.538 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(132MiB/3073msec) 00:13:56.538 slat (usec): min=4, max=14917, avg= 8.21, stdev=131.16 00:13:56.538 clat (usec): min=39, max=8838, avg=81.43, stdev=49.24 00:13:56.538 lat (usec): min=57, max=15005, avg=89.65, stdev=140.31 00:13:56.538 clat percentiles (usec): 00:13:56.538 | 1.00th=[ 65], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:13:56.538 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:13:56.538 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 90], 95.00th=[ 104], 00:13:56.538 | 99.00th=[ 133], 99.50th=[ 147], 99.90th=[ 172], 99.95th=[ 176], 00:13:56.538 | 99.99th=[ 184] 00:13:56.538 bw ( KiB/s): min=43440, max=46232, per=35.11%, avg=45440.00, stdev=1182.54, samples=5 00:13:56.538 iops : min=10860, max=11558, avg=11360.00, stdev=295.63, samples=5 00:13:56.538 lat (usec) : 50=0.01%, 100=94.67%, 250=5.32%, 500=0.01% 00:13:56.538 lat (msec) : 10=0.01% 00:13:56.538 cpu : usr=2.64%, sys=12.76%, ctx=33685, majf=0, minf=1 00:13:56.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.538 issued rwts: total=33680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.538 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3029448: Thu May 16 20:23:09 2024 00:13:56.538 read: IOPS=8447, BW=33.0MiB/s (34.6MB/s)(109MiB/3290msec) 00:13:56.538 slat (usec): min=2, max=14968, avg= 8.97, stdev=144.16 00:13:56.538 clat (usec): min=49, max=20535, avg=107.53, stdev=137.76 00:13:56.538 lat (usec): min=56, max=20542, avg=116.50, stdev=199.19 00:13:56.538 clat percentiles (usec): 00:13:56.538 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 67], 20.00th=[ 79], 00:13:56.538 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 93], 60.00th=[ 116], 00:13:56.538 | 70.00th=[ 131], 80.00th=[ 139], 90.00th=[ 149], 95.00th=[ 174], 00:13:56.538 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 217], 99.95th=[ 223], 00:13:56.538 | 99.99th=[ 1029] 00:13:56.538 bw ( KiB/s): min=27680, max=43048, per=25.33%, avg=32783.17, stdev=6272.36, samples=6 00:13:56.538 iops : min= 6920, max=10762, avg=8195.67, stdev=1568.08, samples=6 00:13:56.538 lat (usec) : 50=0.01%, 100=55.45%, 250=44.51%, 500=0.01% 00:13:56.538 lat (msec) : 2=0.01%, 10=0.01%, 50=0.01% 00:13:56.538 cpu : usr=2.89%, sys=9.36%, ctx=27801, majf=0, minf=1 00:13:56.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.538 issued rwts: total=27793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.538 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3029451: Thu May 16 20:23:09 2024 00:13:56.538 read: IOPS=8984, BW=35.1MiB/s (36.8MB/s)(101MiB/2881msec) 00:13:56.538 slat (usec): min=2, max=10779, avg= 7.77, stdev=82.87 00:13:56.538 clat (usec): min=59, max=26538, avg=102.14, stdev=166.15 00:13:56.538 lat (usec): min=62, max=26545, avg=109.91, stdev=185.77 00:13:56.538 clat percentiles (usec): 00:13:56.538 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 
83], 20.00th=[ 86], 00:13:56.538 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 94], 00:13:56.538 | 70.00th=[ 99], 80.00th=[ 116], 90.00th=[ 139], 95.00th=[ 155], 00:13:56.538 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 208], 99.95th=[ 217], 00:13:56.538 | 99.99th=[ 229] 00:13:56.538 bw ( KiB/s): min=27336, max=41216, per=28.98%, avg=37513.60, stdev=5801.35, samples=5 00:13:56.538 iops : min= 6834, max=10304, avg=9378.40, stdev=1450.34, samples=5 00:13:56.538 lat (usec) : 100=72.26%, 250=27.73%, 500=0.01% 00:13:56.538 lat (msec) : 50=0.01% 00:13:56.538 cpu : usr=2.53%, sys=10.45%, ctx=25889, majf=0, minf=1 00:13:56.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.538 issued rwts: total=25884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.538 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3029452: Thu May 16 20:23:09 2024 00:13:56.538 read: IOPS=7062, BW=27.6MiB/s (28.9MB/s)(74.6MiB/2706msec) 00:13:56.538 slat (nsec): min=5885, max=39436, avg=7548.74, stdev=1833.39 00:13:56.538 clat (usec): min=67, max=872, avg=131.18, stdev=24.12 00:13:56.538 lat (usec): min=75, max=879, avg=138.73, stdev=24.26 00:13:56.538 clat percentiles (usec): 00:13:56.538 | 1.00th=[ 85], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 109], 00:13:56.538 | 30.00th=[ 126], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:13:56.538 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 161], 95.00th=[ 174], 00:13:56.538 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 212], 99.95th=[ 225], 00:13:56.538 | 99.99th=[ 265] 00:13:56.538 bw ( KiB/s): min=27336, max=33216, per=22.25%, avg=28798.40, stdev=2480.74, samples=5 00:13:56.538 iops : min= 6834, max= 8304, avg=7199.60, stdev=620.18, samples=5 00:13:56.538 lat (usec) : 100=15.55%, 250=84.44%, 500=0.01%, 1000=0.01% 00:13:56.538 cpu : usr=2.62%, sys=7.54%, ctx=19111, majf=0, minf=2 00:13:56.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.538 issued rwts: total=19111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.538 00:13:56.538 Run status group 0 (all jobs): 00:13:56.538 READ: bw=126MiB/s (133MB/s), 27.6MiB/s-42.8MiB/s (28.9MB/s-44.9MB/s), io=416MiB (436MB), run=2706-3290msec 00:13:56.538 00:13:56.538 Disk stats (read/write): 00:13:56.538 nvme0n1: ios=31759/0, merge=0/0, ticks=2377/0, in_queue=2377, util=94.79% 00:13:56.538 nvme0n2: ios=25492/0, merge=0/0, ticks=2665/0, in_queue=2665, util=94.68% 00:13:56.538 nvme0n3: ios=25821/0, merge=0/0, ticks=2495/0, in_queue=2495, util=96.06% 00:13:56.538 nvme0n4: ios=18653/0, merge=0/0, ticks=2318/0, in_queue=2318, util=96.49% 00:13:56.797 20:23:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.797 20:23:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:57.056 20:23:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.056 20:23:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:57.056 20:23:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.056 20:23:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:57.315 20:23:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.315 20:23:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:57.573 20:23:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:57.573 20:23:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 3029287 00:13:57.573 20:23:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:57.573 20:23:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:58.510 nvmf hotplug test: fio failed as expected 00:13:58.510 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.769 20:23:11 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:58.769 rmmod nvme_rdma 00:13:58.769 rmmod nvme_fabrics 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3026374 ']' 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3026374 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3026374 ']' 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3026374 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3026374 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3026374' 00:13:58.769 killing process with pid 3026374 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3026374 00:13:58.769 [2024-05-16 20:23:11.644777] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:58.769 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3026374 00:13:58.769 [2024-05-16 20:23:11.728833] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:59.029 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.029 20:23:11 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:59.029 00:13:59.029 real 0m25.811s 00:13:59.029 user 1m53.056s 00:13:59.029 sys 0m8.898s 00:13:59.029 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:59.029 20:23:11 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.029 ************************************ 00:13:59.029 END TEST nvmf_fio_target 00:13:59.029 ************************************ 00:13:59.029 20:23:11 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:13:59.029 20:23:11 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:59.029 20:23:11 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:59.029 20:23:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:59.029 ************************************ 00:13:59.029 START TEST nvmf_bdevio 00:13:59.029 ************************************ 00:13:59.029 20:23:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:13:59.288 * Looking for test storage... 
00:13:59.288 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.288 20:23:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.857 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:05.858 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:05.858 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:05.858 Found net devices under 0000:da:00.0: mlx_0_0 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:05.858 Found net devices under 0000:da:00.1: mlx_0_1 00:14:05.858 
20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:05.858 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:05.858 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:05.858 altname enp218s0f0np0 00:14:05.858 altname ens818f0np0 00:14:05.858 inet 192.168.100.8/24 scope global mlx_0_0 00:14:05.858 valid_lft forever preferred_lft forever 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:05.858 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:05.858 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:05.858 altname enp218s0f1np1 00:14:05.858 altname ens818f1np1 00:14:05.858 inet 192.168.100.9/24 scope global mlx_0_1 00:14:05.858 valid_lft forever preferred_lft forever 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.858 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:05.859 192.168.100.9' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:05.859 192.168.100.9' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:05.859 192.168.100.9' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
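The address-allocation trace above (allocate_nic_ips / get_ip_address in nvmf/common.sh) reduces to one ip/awk/cut pipeline per RDMA interface. A condensed sketch, not the verbatim helper; the interface names mlx_0_0/mlx_0_1 and the 192.168.100.x addresses are the ones observed in this run.

  # Condensed sketch of the get_ip_address step traced above.
  get_ip_address() {
      local interface=$1
      # "ip -o -4 addr show mlx_0_0" prints e.g.
      #   258: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0 ...
      # field 4 is the CIDR address; strip the prefix length.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)   # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)  # 192.168.100.9 in this run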
00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3033839 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3033839 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3033839 ']' 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:05.859 20:23:18 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:05.859 [2024-05-16 20:23:18.250721] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:05.859 [2024-05-16 20:23:18.250766] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.859 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.859 [2024-05-16 20:23:18.311922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.859 [2024-05-16 20:23:18.391104] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.859 [2024-05-16 20:23:18.391140] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.859 [2024-05-16 20:23:18.391147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.859 [2024-05-16 20:23:18.391153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.859 [2024-05-16 20:23:18.391158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:05.859 [2024-05-16 20:23:18.391219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:05.859 [2024-05-16 20:23:18.391323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:05.859 [2024-05-16 20:23:18.391446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.859 [2024-05-16 20:23:18.391447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.117 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.375 [2024-05-16 20:23:19.127041] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15e6290/0x15ea780) succeed. 00:14:06.375 [2024-05-16 20:23:19.137253] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15e78d0/0x162be10) succeed. 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.376 Malloc0 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.376 [2024-05-16 20:23:19.301757] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor 
of trtype to be removed in v24.09 00:14:06.376 [2024-05-16 20:23:19.302141] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.376 { 00:14:06.376 "params": { 00:14:06.376 "name": "Nvme$subsystem", 00:14:06.376 "trtype": "$TEST_TRANSPORT", 00:14:06.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.376 "adrfam": "ipv4", 00:14:06.376 "trsvcid": "$NVMF_PORT", 00:14:06.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.376 "hdgst": ${hdgst:-false}, 00:14:06.376 "ddgst": ${ddgst:-false} 00:14:06.376 }, 00:14:06.376 "method": "bdev_nvme_attach_controller" 00:14:06.376 } 00:14:06.376 EOF 00:14:06.376 )") 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:06.376 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.376 "params": { 00:14:06.376 "name": "Nvme1", 00:14:06.376 "trtype": "rdma", 00:14:06.376 "traddr": "192.168.100.8", 00:14:06.376 "adrfam": "ipv4", 00:14:06.376 "trsvcid": "4420", 00:14:06.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.376 "hdgst": false, 00:14:06.376 "ddgst": false 00:14:06.376 }, 00:14:06.376 "method": "bdev_nvme_attach_controller" 00:14:06.376 }' 00:14:06.376 [2024-05-16 20:23:19.350346] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:14:06.376 [2024-05-16 20:23:19.350390] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034006 ] 00:14:06.634 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.634 [2024-05-16 20:23:19.410722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.634 [2024-05-16 20:23:19.486357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.634 [2024-05-16 20:23:19.486468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.634 [2024-05-16 20:23:19.486471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.893 I/O targets: 00:14:06.893 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:06.893 00:14:06.893 00:14:06.893 CUnit - A unit testing framework for C - Version 2.1-3 00:14:06.893 http://cunit.sourceforge.net/ 00:14:06.893 00:14:06.893 00:14:06.893 Suite: bdevio tests on: Nvme1n1 00:14:06.893 Test: blockdev write read block ...passed 00:14:06.893 Test: blockdev write zeroes read block ...passed 00:14:06.893 Test: blockdev write zeroes read no split ...passed 00:14:06.893 Test: blockdev write zeroes read split ...passed 00:14:06.893 Test: blockdev write zeroes read split partial ...passed 00:14:06.893 Test: blockdev reset ...[2024-05-16 20:23:19.687447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:06.893 [2024-05-16 20:23:19.710147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:06.893 [2024-05-16 20:23:19.736837] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:06.893 passed 00:14:06.893 Test: blockdev write read 8 blocks ...passed 00:14:06.893 Test: blockdev write read size > 128k ...passed 00:14:06.893 Test: blockdev write read invalid size ...passed 00:14:06.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:06.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:06.893 Test: blockdev write read max offset ...passed 00:14:06.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:06.893 Test: blockdev writev readv 8 blocks ...passed 00:14:06.893 Test: blockdev writev readv 30 x 1block ...passed 00:14:06.893 Test: blockdev writev readv block ...passed 00:14:06.893 Test: blockdev writev readv size > 128k ...passed 00:14:06.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:06.893 Test: blockdev comparev and writev ...[2024-05-16 20:23:19.739800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.893 [2024-05-16 20:23:19.739828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.739838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.893 [2024-05-16 20:23:19.739845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.739994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.893 [2024-05-16 20:23:19.740003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.740011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.893 [2024-05-16 20:23:19.740018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.740176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.893 [2024-05-16 20:23:19.740185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.740192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.893 [2024-05-16 20:23:19.740198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.740384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.893 [2024-05-16 20:23:19.740393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.740401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.893 [2024-05-16 20:23:19.740407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:06.893 passed 00:14:06.893 Test: blockdev nvme passthru rw ...passed 00:14:06.893 Test: blockdev nvme passthru vendor specific ...[2024-05-16 20:23:19.740669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:06.893 [2024-05-16 20:23:19.740680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.740729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:06.893 [2024-05-16 20:23:19.740738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.740778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:06.893 [2024-05-16 20:23:19.740786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:06.893 [2024-05-16 20:23:19.740824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:06.893 [2024-05-16 20:23:19.740832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:06.893 passed 00:14:06.893 Test: blockdev nvme admin passthru ...passed 00:14:06.893 Test: blockdev copy ...passed 00:14:06.893 00:14:06.893 Run Summary: Type Total Ran Passed Failed Inactive 00:14:06.893 suites 1 1 n/a 0 0 00:14:06.893 tests 23 23 23 0 0 00:14:06.893 asserts 152 152 152 0 n/a 00:14:06.893 00:14:06.893 Elapsed time = 0.171 seconds 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:07.151 rmmod nvme_rdma 00:14:07.151 rmmod nvme_fabrics 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3033839 ']' 00:14:07.151 20:23:19 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3033839 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 3033839 ']' 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3033839 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:07.151 20:23:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3033839 00:14:07.151 20:23:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:14:07.151 20:23:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:14:07.151 20:23:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3033839' 00:14:07.152 killing process with pid 3033839 00:14:07.152 20:23:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3033839 00:14:07.152 [2024-05-16 20:23:20.029255] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:07.152 20:23:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3033839 00:14:07.152 [2024-05-16 20:23:20.109249] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:07.410 20:23:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.410 20:23:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:07.410 00:14:07.410 real 0m8.326s 00:14:07.410 user 0m10.389s 00:14:07.410 sys 0m5.158s 00:14:07.410 20:23:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.410 20:23:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:07.410 ************************************ 00:14:07.410 END TEST nvmf_bdevio 00:14:07.410 ************************************ 00:14:07.410 20:23:20 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:07.410 20:23:20 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:07.410 20:23:20 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.410 20:23:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:07.410 ************************************ 00:14:07.410 START TEST nvmf_auth_target 00:14:07.410 ************************************ 00:14:07.410 20:23:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:07.669 * Looking for test storage... 
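Before the auth-target output gets going, it is worth recapping how the bdevio target above was stood up. rpc_cmd in the trace forwards to the SPDK RPC client (directly or via a persistent session), so the whole configuration boils down to five calls against the target's default socket; expressed as plain rpc.py invocations it would look roughly like this (script path relative to the SPDK tree assumed):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The bdevio binary itself never saw a config file: gen_nvmf_target_json wrote the bdev_nvme_attach_controller JSON shown earlier into a process substitution, which is why its command line reads --json /dev/fd/62.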
00:14:07.669 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.669 20:23:20 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.670 20:23:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:14.232 20:23:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:14.232 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:14.232 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.232 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:14.233 Found net devices under 0000:da:00.0: mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:14.233 Found net devices under 0000:da:00.1: mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:14.233 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:14.233 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:14.233 altname enp218s0f0np0 00:14:14.233 altname ens818f0np0 00:14:14.233 inet 192.168.100.8/24 scope global mlx_0_0 00:14:14.233 valid_lft forever preferred_lft forever 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:14.233 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:14.233 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:14.233 altname enp218s0f1np1 00:14:14.233 altname ens818f1np1 00:14:14.233 inet 192.168.100.9/24 scope global mlx_0_1 00:14:14.233 valid_lft forever preferred_lft forever 
00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:14.233 
20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:14.233 192.168.100.9' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:14.233 192.168.100.9' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:14.233 192.168.100.9' 00:14:14.233 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3037580 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3037580 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3037580 ']' 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
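nvmfappstart then launches nvmf_tgt in the background (with -L nvmf_auth so the auth-related debug log component is enabled), stores its pid in nvmfpid, and parks in waitforlisten until the app answers on /var/tmp/spdk.sock. The helper's internals are not shown in this trace; the socket path and the 100-retry bound do appear above, but the rpc.py probe and sleep interval below are assumptions. A minimal sketch of the idea:

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1                              # app died early
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1                                                              # never came up
  }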
00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.234 20:23:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3037818 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eadfb4593700d7d06920352fe279172a0eecf810e46b8502 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bbv 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eadfb4593700d7d06920352fe279172a0eecf810e46b8502 0 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eadfb4593700d7d06920352fe279172a0eecf810e46b8502 0 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eadfb4593700d7d06920352fe279172a0eecf810e46b8502 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bbv 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bbv 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.bbv 
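gen_dhchap_key null 48 above is what produced keys[0]: xxd pulls 24 random bytes and prints them as a 48-character hex string, and that printable string itself appears to serve as the DH-HMAC-CHAP secret (which is why the requested length matches the character count). The python one-liner the trace elides presumably packages it into the interchange form DHHC-1:<t>:<base64>:, where <t> selects the optional hash transform (0 for this "null" key) and the base64 payload is the secret followed by its CRC-32. A hedged standalone sketch of that packaging; the CRC byte order and the two-digit transform field are my reading of the interchange format, not something visible in the log:

  secret=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex characters, as in the trace
  b64=$(python3 -c 'import sys,base64,zlib,struct; s=sys.argv[1].encode(); print(base64.b64encode(s + struct.pack("<I", zlib.crc32(s))).decode())' "$secret")
  key_file=$(mktemp -t spdk.key-null.XXX)
  printf 'DHHC-1:00:%s:\n' "$b64" > "$key_file"
  chmod 0600 "$key_file"                      # matches the chmod 0600 in the trace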
00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ad5956ad7e7041e5945e71e71313622b05b0f77bab290645fb82f874ee6e7228 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dV0 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ad5956ad7e7041e5945e71e71313622b05b0f77bab290645fb82f874ee6e7228 3 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ad5956ad7e7041e5945e71e71313622b05b0f77bab290645fb82f874ee6e7228 3 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ad5956ad7e7041e5945e71e71313622b05b0f77bab290645fb82f874ee6e7228 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dV0 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dV0 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.dV0 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2917c46c0c8f8efeceb6b53e74d5d861 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AUk 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2917c46c0c8f8efeceb6b53e74d5d861 1 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2917c46c0c8f8efeceb6b53e74d5d861 1 00:14:14.234 20:23:27 
nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2917c46c0c8f8efeceb6b53e74d5d861 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:14.234 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AUk 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AUk 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.AUk 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e45a1c84c404477aac7275cdc4de974bd0aae0ff3ee18715 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gTU 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e45a1c84c404477aac7275cdc4de974bd0aae0ff3ee18715 2 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e45a1c84c404477aac7275cdc4de974bd0aae0ff3ee18715 2 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e45a1c84c404477aac7275cdc4de974bd0aae0ff3ee18715 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gTU 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gTU 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.gTU 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 
00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=55813229de0dafc5321b545d08eb68b72ab003f7cb0942df 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8Rr 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 55813229de0dafc5321b545d08eb68b72ab003f7cb0942df 2 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 55813229de0dafc5321b545d08eb68b72ab003f7cb0942df 2 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=55813229de0dafc5321b545d08eb68b72ab003f7cb0942df 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8Rr 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8Rr 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.8Rr 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bf3707f6648f29d9673154df433df11f 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ulm 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bf3707f6648f29d9673154df433df11f 1 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bf3707f6648f29d9673154df433df11f 1 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bf3707f6648f29d9673154df433df11f 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ulm 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ulm 
00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Ulm 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=52519fb4048fa5abed8eca0eb6e5e5426b8d69660a17e3eabaefbf1d0d1088e6 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.M2t 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 52519fb4048fa5abed8eca0eb6e5e5426b8d69660a17e3eabaefbf1d0d1088e6 3 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 52519fb4048fa5abed8eca0eb6e5e5426b8d69660a17e3eabaefbf1d0d1088e6 3 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=52519fb4048fa5abed8eca0eb6e5e5426b8d69660a17e3eabaefbf1d0d1088e6 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:14.494 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.M2t 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.M2t 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.M2t 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3037580 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3037580 ']' 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
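Each gen_dhchap_key <digest> <len> block in the trace follows the same pattern: pull len hex characters from /dev/urandom with xxd, wrap them into a DHHC-1 secret, and park the result in a mode-0600 temp file whose path is what ends up in the keys[] and ckeys[] arrays. A condensed, hypothetical equivalent is sketched below, reusing the format_dhchap_secret helper from the previous sketch; the names are illustrative, not the real function bodies.

# Condensed re-creation of one gen_dhchap_key iteration seen in the trace.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # len hex chars of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")

    format_dhchap_secret "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                                   # keep the secret unreadable to others
    echo "$file"
}

# Usage matching the pairings in the trace, e.g.:
#   keys[2]=$(gen_dhchap_key sha384 48)    # 48 hex chars = 24 random bytes
#   ckeys[2]=$(gen_dhchap_key sha256 32)   # 32 hex chars = 16 random bytes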
00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3037818 /var/tmp/host.sock 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3037818 ']' 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:14.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.753 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bbv 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.bbv 00:14:15.012 20:23:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.bbv 00:14:15.271 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.dV0 ]] 00:14:15.271 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dV0 00:14:15.271 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.271 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.271 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.271 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dV0 00:14:15.271 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dV0 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AUk 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AUk 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AUk 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.gTU ]] 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gTU 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gTU 00:14:15.531 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gTU 00:14:15.789 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:15.789 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8Rr 00:14:15.789 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.789 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.789 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.789 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.8Rr 00:14:15.789 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.8Rr 00:14:16.048 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Ulm ]] 00:14:16.048 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ulm 00:14:16.048 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.048 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.048 20:23:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.048 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ulm 00:14:16.048 20:23:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ulm 00:14:16.048 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:16.048 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.M2t 00:14:16.048 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.048 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.M2t 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.M2t 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.307 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.566 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.825 00:14:16.825 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.825 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.825 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.083 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.083 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.084 { 00:14:17.084 "cntlid": 1, 00:14:17.084 "qid": 0, 00:14:17.084 "state": "enabled", 00:14:17.084 "listen_address": { 00:14:17.084 "trtype": "RDMA", 00:14:17.084 "adrfam": "IPv4", 00:14:17.084 "traddr": "192.168.100.8", 00:14:17.084 "trsvcid": "4420" 00:14:17.084 }, 00:14:17.084 "peer_address": { 00:14:17.084 "trtype": "RDMA", 00:14:17.084 "adrfam": "IPv4", 00:14:17.084 "traddr": "192.168.100.8", 00:14:17.084 "trsvcid": "56771" 00:14:17.084 }, 00:14:17.084 "auth": { 00:14:17.084 "state": "completed", 00:14:17.084 "digest": "sha256", 00:14:17.084 "dhgroup": "null" 00:14:17.084 } 00:14:17.084 } 00:14:17.084 ]' 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.084 20:23:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.342 20:23:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:14:17.908 20:23:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.908 20:23:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:17.908 20:23:30 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.908 20:23:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.908 20:23:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.908 20:23:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.908 20:23:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.908 20:23:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.167 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.425 00:14:18.425 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.425 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.425 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.683 20:23:31 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.683 { 00:14:18.683 "cntlid": 3, 00:14:18.683 "qid": 0, 00:14:18.683 "state": "enabled", 00:14:18.683 "listen_address": { 00:14:18.683 "trtype": "RDMA", 00:14:18.683 "adrfam": "IPv4", 00:14:18.683 "traddr": "192.168.100.8", 00:14:18.683 "trsvcid": "4420" 00:14:18.683 }, 00:14:18.683 "peer_address": { 00:14:18.683 "trtype": "RDMA", 00:14:18.683 "adrfam": "IPv4", 00:14:18.683 "traddr": "192.168.100.8", 00:14:18.683 "trsvcid": "42532" 00:14:18.683 }, 00:14:18.683 "auth": { 00:14:18.683 "state": "completed", 00:14:18.683 "digest": "sha256", 00:14:18.683 "dhgroup": "null" 00:14:18.683 } 00:14:18.683 } 00:14:18.683 ]' 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.683 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.941 20:23:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:14:19.567 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.567 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:19.567 20:23:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.567 20:23:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
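Once the key files are registered on both sides (rpc_cmd keyring_file_add_key against the target's /var/tmp/spdk.sock, hostrpc against the host-side /var/tmp/host.sock), every round in this trace repeats the same RPC sequence: pin the host's bdev layer to one digest/dhgroup pair, bind the key (and, when present, the controller key) to the host NQN on the target, attach the controller over RDMA, check the resulting qpair's auth block with jq, then detach and repeat the handshake with the kernel initiator. The sketch below condenses one such round. The individual commands are the ones visible in the trace, but the rpc path and the shell variables are placeholders, and reading the DHHC-1 strings back out of the generated key files for the kernel-initiator step is an assumption about how those --dhchap-secret values were produced.

# Hypothetical condensation of one connect_authenticate round from this trace.
rpc=/path/to/spdk/scripts/rpc.py              # placeholder for the workspace copy of rpc.py
tgt_sock=/var/tmp/spdk.sock host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

# 1) Make the secret files visible to both the target and the host-side bdev layer.
$rpc -s "$tgt_sock"  keyring_file_add_key key1  /tmp/spdk.key-sha256.AUk
$rpc -s "$tgt_sock"  keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gTU
$rpc -s "$host_sock" keyring_file_add_key key1  /tmp/spdk.key-sha256.AUk
$rpc -s "$host_sock" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gTU

# 2) Pin the host side to one digest/dhgroup combination for this round.
$rpc -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# 3) Require DH-CHAP for this host on the target, then attach from the host side.
$rpc -s "$tgt_sock" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4) Verify that the qpair actually completed DH-CHAP with the expected parameters.
[[ $($rpc -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($rpc -s "$tgt_sock" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
$rpc -s "$host_sock" bdev_nvme_detach_controller nvme0

# 5) Exercise the kernel initiator against the same target configuration, feeding it the
#    DHHC-1 strings (assumed to be exactly the contents of the generated key files).
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 803833e2-2ada-e911-906e-0017a4403562 \
    --dhchap-secret "$(cat /tmp/spdk.key-sha256.AUk)" \
    --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha384.gTU)"
nvme disconnect -n "$subnqn"
$rpc -s "$tgt_sock" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Running both an SPDK bdev_nvme attach and a kernel nvme connect per round is what produces the duplicated attach/detach and connect/disconnect pairs throughout the rest of this trace: the same key material and subsystem configuration are checked against both initiator implementations before the loop moves on to the next digest/dhgroup/keyid combination.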
00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.829 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.087 00:14:20.087 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.087 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.087 20:23:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.345 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.345 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.345 20:23:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.345 20:23:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.345 20:23:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.345 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.346 { 00:14:20.346 "cntlid": 5, 00:14:20.346 "qid": 0, 00:14:20.346 "state": "enabled", 00:14:20.346 "listen_address": { 00:14:20.346 "trtype": "RDMA", 00:14:20.346 "adrfam": "IPv4", 00:14:20.346 "traddr": "192.168.100.8", 00:14:20.346 "trsvcid": "4420" 00:14:20.346 }, 00:14:20.346 "peer_address": { 00:14:20.346 "trtype": "RDMA", 00:14:20.346 "adrfam": "IPv4", 00:14:20.346 "traddr": "192.168.100.8", 00:14:20.346 "trsvcid": "44657" 00:14:20.346 }, 00:14:20.346 "auth": { 00:14:20.346 "state": "completed", 00:14:20.346 "digest": "sha256", 00:14:20.346 "dhgroup": "null" 00:14:20.346 } 00:14:20.346 } 00:14:20.346 ]' 00:14:20.346 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.346 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.346 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.346 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:14:20.346 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.346 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.346 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.346 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.604 20:23:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:14:21.168 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:14:21.427 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.686 00:14:21.686 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.686 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.686 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.944 { 00:14:21.944 "cntlid": 7, 00:14:21.944 "qid": 0, 00:14:21.944 "state": "enabled", 00:14:21.944 "listen_address": { 00:14:21.944 "trtype": "RDMA", 00:14:21.944 "adrfam": "IPv4", 00:14:21.944 "traddr": "192.168.100.8", 00:14:21.944 "trsvcid": "4420" 00:14:21.944 }, 00:14:21.944 "peer_address": { 00:14:21.944 "trtype": "RDMA", 00:14:21.944 "adrfam": "IPv4", 00:14:21.944 "traddr": "192.168.100.8", 00:14:21.944 "trsvcid": "50608" 00:14:21.944 }, 00:14:21.944 "auth": { 00:14:21.944 "state": "completed", 00:14:21.944 "digest": "sha256", 00:14:21.944 "dhgroup": "null" 00:14:21.944 } 00:14:21.944 } 00:14:21.944 ]' 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.944 20:23:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.202 20:23:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:14:22.775 20:23:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.032 20:23:35 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:23.033 20:23:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.033 20:23:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.033 20:23:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.033 20:23:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.033 20:23:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.033 20:23:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.033 20:23:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.033 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.291 00:14:23.291 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.291 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.291 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.548 { 00:14:23.548 "cntlid": 9, 00:14:23.548 "qid": 0, 00:14:23.548 "state": "enabled", 00:14:23.548 "listen_address": { 00:14:23.548 "trtype": "RDMA", 00:14:23.548 "adrfam": "IPv4", 00:14:23.548 "traddr": "192.168.100.8", 00:14:23.548 "trsvcid": "4420" 00:14:23.548 }, 00:14:23.548 "peer_address": { 00:14:23.548 "trtype": "RDMA", 00:14:23.548 "adrfam": "IPv4", 00:14:23.548 "traddr": "192.168.100.8", 00:14:23.548 "trsvcid": "58730" 00:14:23.548 }, 00:14:23.548 "auth": { 00:14:23.548 "state": "completed", 00:14:23.548 "digest": "sha256", 00:14:23.548 "dhgroup": "ffdhe2048" 00:14:23.548 } 00:14:23.548 } 00:14:23.548 ]' 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.548 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.806 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.806 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.806 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.806 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.806 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.806 20:23:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.771 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.030 00:14:25.030 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.030 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.030 20:23:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.288 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.288 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.288 20:23:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.288 20:23:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.288 20:23:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.288 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.288 { 00:14:25.288 "cntlid": 11, 00:14:25.288 "qid": 0, 00:14:25.288 "state": "enabled", 00:14:25.288 "listen_address": { 00:14:25.288 "trtype": "RDMA", 00:14:25.288 "adrfam": "IPv4", 00:14:25.288 "traddr": "192.168.100.8", 00:14:25.289 "trsvcid": "4420" 00:14:25.289 }, 00:14:25.289 "peer_address": { 00:14:25.289 "trtype": "RDMA", 00:14:25.289 "adrfam": "IPv4", 00:14:25.289 "traddr": "192.168.100.8", 00:14:25.289 "trsvcid": "44906" 00:14:25.289 }, 00:14:25.289 "auth": { 00:14:25.289 "state": "completed", 00:14:25.289 
"digest": "sha256", 00:14:25.289 "dhgroup": "ffdhe2048" 00:14:25.289 } 00:14:25.289 } 00:14:25.289 ]' 00:14:25.289 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.289 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.289 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.289 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.289 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.289 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.289 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.289 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.547 20:23:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:14:26.115 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.375 
20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.375 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.634 00:14:26.634 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.634 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.634 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.893 { 00:14:26.893 "cntlid": 13, 00:14:26.893 "qid": 0, 00:14:26.893 "state": "enabled", 00:14:26.893 "listen_address": { 00:14:26.893 "trtype": "RDMA", 00:14:26.893 "adrfam": "IPv4", 00:14:26.893 "traddr": "192.168.100.8", 00:14:26.893 "trsvcid": "4420" 00:14:26.893 }, 00:14:26.893 "peer_address": { 00:14:26.893 "trtype": "RDMA", 00:14:26.893 "adrfam": "IPv4", 00:14:26.893 "traddr": "192.168.100.8", 00:14:26.893 "trsvcid": "38654" 00:14:26.893 }, 00:14:26.893 "auth": { 00:14:26.893 "state": "completed", 00:14:26.893 "digest": "sha256", 00:14:26.893 "dhgroup": "ffdhe2048" 00:14:26.893 } 00:14:26.893 } 00:14:26.893 ]' 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.893 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.153 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.153 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.153 20:23:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:14:27.153 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:14:27.721 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.980 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:27.980 20:23:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.980 20:23:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.980 20:23:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.980 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.980 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.980 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.239 20:23:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.239 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.498 20:23:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.498 { 00:14:28.498 "cntlid": 15, 00:14:28.498 "qid": 0, 00:14:28.498 "state": "enabled", 00:14:28.498 "listen_address": { 00:14:28.498 "trtype": "RDMA", 00:14:28.498 "adrfam": "IPv4", 00:14:28.498 "traddr": "192.168.100.8", 00:14:28.498 "trsvcid": "4420" 00:14:28.498 }, 00:14:28.498 "peer_address": { 00:14:28.498 "trtype": "RDMA", 00:14:28.498 "adrfam": "IPv4", 00:14:28.498 "traddr": "192.168.100.8", 00:14:28.498 "trsvcid": "45249" 00:14:28.498 }, 00:14:28.498 "auth": { 00:14:28.498 "state": "completed", 00:14:28.498 "digest": "sha256", 00:14:28.498 "dhgroup": "ffdhe2048" 00:14:28.498 } 00:14:28.498 } 00:14:28.498 ]' 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.498 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.757 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.757 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.757 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.757 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.757 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.757 20:23:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.694 20:23:42 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.694 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.953 00:14:29.953 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.953 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.953 20:23:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.211 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.212 { 00:14:30.212 "cntlid": 17, 00:14:30.212 "qid": 0, 00:14:30.212 "state": "enabled", 00:14:30.212 
"listen_address": { 00:14:30.212 "trtype": "RDMA", 00:14:30.212 "adrfam": "IPv4", 00:14:30.212 "traddr": "192.168.100.8", 00:14:30.212 "trsvcid": "4420" 00:14:30.212 }, 00:14:30.212 "peer_address": { 00:14:30.212 "trtype": "RDMA", 00:14:30.212 "adrfam": "IPv4", 00:14:30.212 "traddr": "192.168.100.8", 00:14:30.212 "trsvcid": "51965" 00:14:30.212 }, 00:14:30.212 "auth": { 00:14:30.212 "state": "completed", 00:14:30.212 "digest": "sha256", 00:14:30.212 "dhgroup": "ffdhe3072" 00:14:30.212 } 00:14:30.212 } 00:14:30.212 ]' 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.212 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.470 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:14:31.039 20:23:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.298 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.557 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.557 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.557 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.557 00:14:31.557 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.557 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.557 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.816 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.816 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.816 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.816 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.816 20:23:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.816 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.816 { 00:14:31.816 "cntlid": 19, 00:14:31.816 "qid": 0, 00:14:31.816 "state": "enabled", 00:14:31.816 "listen_address": { 00:14:31.816 "trtype": "RDMA", 00:14:31.816 "adrfam": "IPv4", 00:14:31.816 "traddr": "192.168.100.8", 00:14:31.816 "trsvcid": "4420" 00:14:31.816 }, 00:14:31.816 "peer_address": { 00:14:31.816 "trtype": "RDMA", 00:14:31.816 "adrfam": "IPv4", 00:14:31.816 "traddr": "192.168.100.8", 00:14:31.816 "trsvcid": "55805" 00:14:31.816 }, 00:14:31.816 "auth": { 00:14:31.816 "state": "completed", 00:14:31.816 "digest": "sha256", 00:14:31.816 "dhgroup": "ffdhe3072" 00:14:31.816 } 00:14:31.816 } 00:14:31.816 ]' 00:14:31.816 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.817 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.817 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.817 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.817 20:23:44 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.077 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.077 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.077 20:23:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.077 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:14:32.646 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.905 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:32.905 20:23:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.905 20:23:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.905 20:23:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.905 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.905 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.905 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.164 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:33.164 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:14:33.165 20:23:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.165 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.424 { 00:14:33.424 "cntlid": 21, 00:14:33.424 "qid": 0, 00:14:33.424 "state": "enabled", 00:14:33.424 "listen_address": { 00:14:33.424 "trtype": "RDMA", 00:14:33.424 "adrfam": "IPv4", 00:14:33.424 "traddr": "192.168.100.8", 00:14:33.424 "trsvcid": "4420" 00:14:33.424 }, 00:14:33.424 "peer_address": { 00:14:33.424 "trtype": "RDMA", 00:14:33.424 "adrfam": "IPv4", 00:14:33.424 "traddr": "192.168.100.8", 00:14:33.424 "trsvcid": "43364" 00:14:33.424 }, 00:14:33.424 "auth": { 00:14:33.424 "state": "completed", 00:14:33.424 "digest": "sha256", 00:14:33.424 "dhgroup": "ffdhe3072" 00:14:33.424 } 00:14:33.424 } 00:14:33.424 ]' 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.424 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.683 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.683 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.683 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.683 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.683 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.683 20:23:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:14:34.252 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:34.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.512 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:34.512 20:23:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.512 20:23:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.512 20:23:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.512 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.512 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.512 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.771 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.771 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.030 { 00:14:35.030 "cntlid": 23, 00:14:35.030 "qid": 0, 00:14:35.030 "state": "enabled", 00:14:35.030 "listen_address": { 00:14:35.030 "trtype": "RDMA", 00:14:35.030 "adrfam": "IPv4", 00:14:35.030 "traddr": "192.168.100.8", 00:14:35.030 "trsvcid": "4420" 00:14:35.030 }, 00:14:35.030 "peer_address": { 00:14:35.030 "trtype": "RDMA", 00:14:35.030 "adrfam": "IPv4", 00:14:35.030 "traddr": "192.168.100.8", 00:14:35.030 "trsvcid": "48331" 00:14:35.030 }, 00:14:35.030 "auth": { 00:14:35.030 "state": "completed", 00:14:35.030 "digest": "sha256", 00:14:35.030 "dhgroup": "ffdhe3072" 00:14:35.030 } 00:14:35.030 } 00:14:35.030 ]' 00:14:35.030 20:23:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.030 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.030 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.289 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.289 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.289 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.289 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.289 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.289 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.225 20:23:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.225 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.484 00:14:36.484 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.485 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.485 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.745 { 00:14:36.745 "cntlid": 25, 00:14:36.745 "qid": 0, 00:14:36.745 "state": "enabled", 00:14:36.745 "listen_address": { 00:14:36.745 "trtype": "RDMA", 00:14:36.745 "adrfam": "IPv4", 00:14:36.745 "traddr": "192.168.100.8", 00:14:36.745 "trsvcid": "4420" 00:14:36.745 }, 00:14:36.745 "peer_address": { 00:14:36.745 "trtype": "RDMA", 00:14:36.745 "adrfam": "IPv4", 00:14:36.745 "traddr": "192.168.100.8", 00:14:36.745 "trsvcid": "39560" 00:14:36.745 }, 00:14:36.745 "auth": { 00:14:36.745 "state": "completed", 00:14:36.745 "digest": "sha256", 00:14:36.745 "dhgroup": "ffdhe4096" 00:14:36.745 } 00:14:36.745 } 00:14:36.745 ]' 00:14:36.745 20:23:49 
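After each attach, the trace shows the script confirming that the queue pair really negotiated the parameters under test before tearing the controller down again. A minimal sketch of that verification, reusing the placeholder variables from the earlier sketch (the expected digest and dhgroup correspond to the ffdhe4096 iteration shown here):

# Sketch of the post-attach checks seen in the trace (placeholders as before).
name=$($RPC -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]]
qpairs=$($RPC nvmf_subsystem_get_qpairs "$NQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
# Detach so the subsequent nvme-cli connect exercises the kernel initiator path.
$RPC -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0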
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.745 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.003 20:23:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:14:37.576 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.835 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:37.835 20:23:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.835 20:23:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.835 20:23:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.835 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.835 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.835 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.094 20:23:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.094 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.353 { 00:14:38.353 "cntlid": 27, 00:14:38.353 "qid": 0, 00:14:38.353 "state": "enabled", 00:14:38.353 "listen_address": { 00:14:38.353 "trtype": "RDMA", 00:14:38.353 "adrfam": "IPv4", 00:14:38.353 "traddr": "192.168.100.8", 00:14:38.353 "trsvcid": "4420" 00:14:38.353 }, 00:14:38.353 "peer_address": { 00:14:38.353 "trtype": "RDMA", 00:14:38.353 "adrfam": "IPv4", 00:14:38.353 "traddr": "192.168.100.8", 00:14:38.353 "trsvcid": "48941" 00:14:38.353 }, 00:14:38.353 "auth": { 00:14:38.353 "state": "completed", 00:14:38.353 "digest": "sha256", 00:14:38.353 "dhgroup": "ffdhe4096" 00:14:38.353 } 00:14:38.353 } 00:14:38.353 ]' 00:14:38.353 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.354 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.354 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.612 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.612 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.612 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.612 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.613 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.613 20:23:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.550 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.809 00:14:39.809 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.809 20:23:52 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.809 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.068 { 00:14:40.068 "cntlid": 29, 00:14:40.068 "qid": 0, 00:14:40.068 "state": "enabled", 00:14:40.068 "listen_address": { 00:14:40.068 "trtype": "RDMA", 00:14:40.068 "adrfam": "IPv4", 00:14:40.068 "traddr": "192.168.100.8", 00:14:40.068 "trsvcid": "4420" 00:14:40.068 }, 00:14:40.068 "peer_address": { 00:14:40.068 "trtype": "RDMA", 00:14:40.068 "adrfam": "IPv4", 00:14:40.068 "traddr": "192.168.100.8", 00:14:40.068 "trsvcid": "55679" 00:14:40.068 }, 00:14:40.068 "auth": { 00:14:40.068 "state": "completed", 00:14:40.068 "digest": "sha256", 00:14:40.068 "dhgroup": "ffdhe4096" 00:14:40.068 } 00:14:40.068 } 00:14:40.068 ]' 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.068 20:23:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.068 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.068 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.327 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.327 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.327 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.327 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:14:40.893 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.153 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:41.153 20:23:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.153 20:23:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.153 20:23:53 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.153 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.153 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.153 20:23:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.413 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.672 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.672 { 00:14:41.672 "cntlid": 31, 00:14:41.672 "qid": 0, 00:14:41.672 "state": "enabled", 00:14:41.672 "listen_address": { 00:14:41.672 "trtype": "RDMA", 00:14:41.672 "adrfam": "IPv4", 00:14:41.672 "traddr": 
"192.168.100.8", 00:14:41.672 "trsvcid": "4420" 00:14:41.672 }, 00:14:41.672 "peer_address": { 00:14:41.672 "trtype": "RDMA", 00:14:41.672 "adrfam": "IPv4", 00:14:41.672 "traddr": "192.168.100.8", 00:14:41.672 "trsvcid": "45806" 00:14:41.672 }, 00:14:41.672 "auth": { 00:14:41.672 "state": "completed", 00:14:41.672 "digest": "sha256", 00:14:41.672 "dhgroup": "ffdhe4096" 00:14:41.672 } 00:14:41.672 } 00:14:41.672 ]' 00:14:41.672 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.931 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.931 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.931 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:41.931 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.931 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.931 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.931 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.189 20:23:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:42.757 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.016 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:43.016 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.016 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.016 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:43.016 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:43.016 20:23:55 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.017 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.017 20:23:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.017 20:23:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.017 20:23:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.017 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.017 20:23:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.276 00:14:43.276 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.276 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.276 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.535 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.535 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.535 20:23:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.535 20:23:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.535 20:23:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.535 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.535 { 00:14:43.535 "cntlid": 33, 00:14:43.535 "qid": 0, 00:14:43.535 "state": "enabled", 00:14:43.535 "listen_address": { 00:14:43.535 "trtype": "RDMA", 00:14:43.535 "adrfam": "IPv4", 00:14:43.535 "traddr": "192.168.100.8", 00:14:43.535 "trsvcid": "4420" 00:14:43.535 }, 00:14:43.535 "peer_address": { 00:14:43.535 "trtype": "RDMA", 00:14:43.535 "adrfam": "IPv4", 00:14:43.536 "traddr": "192.168.100.8", 00:14:43.536 "trsvcid": "59531" 00:14:43.536 }, 00:14:43.536 "auth": { 00:14:43.536 "state": "completed", 00:14:43.536 "digest": "sha256", 00:14:43.536 "dhgroup": "ffdhe6144" 00:14:43.536 } 00:14:43.536 } 00:14:43.536 ]' 00:14:43.536 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.536 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.536 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.536 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:43.536 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.536 20:23:56 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.536 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.536 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.795 20:23:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:14:44.363 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.622 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.622 20:23:57 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.202 00:14:45.202 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.202 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.202 20:23:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.202 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.202 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.202 20:23:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.202 20:23:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.202 20:23:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.202 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.202 { 00:14:45.203 "cntlid": 35, 00:14:45.203 "qid": 0, 00:14:45.203 "state": "enabled", 00:14:45.203 "listen_address": { 00:14:45.203 "trtype": "RDMA", 00:14:45.203 "adrfam": "IPv4", 00:14:45.203 "traddr": "192.168.100.8", 00:14:45.203 "trsvcid": "4420" 00:14:45.203 }, 00:14:45.203 "peer_address": { 00:14:45.203 "trtype": "RDMA", 00:14:45.203 "adrfam": "IPv4", 00:14:45.203 "traddr": "192.168.100.8", 00:14:45.203 "trsvcid": "52724" 00:14:45.203 }, 00:14:45.203 "auth": { 00:14:45.203 "state": "completed", 00:14:45.203 "digest": "sha256", 00:14:45.203 "dhgroup": "ffdhe6144" 00:14:45.203 } 00:14:45.203 } 00:14:45.203 ]' 00:14:45.203 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.203 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.203 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.203 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:45.203 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.536 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.536 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.536 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.536 20:23:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:14:46.126 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.383 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.384 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.948 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.949 20:23:59 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.949 { 00:14:46.949 "cntlid": 37, 00:14:46.949 "qid": 0, 00:14:46.949 "state": "enabled", 00:14:46.949 "listen_address": { 00:14:46.949 "trtype": "RDMA", 00:14:46.949 "adrfam": "IPv4", 00:14:46.949 "traddr": "192.168.100.8", 00:14:46.949 "trsvcid": "4420" 00:14:46.949 }, 00:14:46.949 "peer_address": { 00:14:46.949 "trtype": "RDMA", 00:14:46.949 "adrfam": "IPv4", 00:14:46.949 "traddr": "192.168.100.8", 00:14:46.949 "trsvcid": "34815" 00:14:46.949 }, 00:14:46.949 "auth": { 00:14:46.949 "state": "completed", 00:14:46.949 "digest": "sha256", 00:14:46.949 "dhgroup": "ffdhe6144" 00:14:46.949 } 00:14:46.949 } 00:14:46.949 ]' 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.949 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.207 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.207 20:23:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.207 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.207 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.207 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.466 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:14:48.032 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.032 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:48.032 20:24:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.032 20:24:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.032 20:24:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.032 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.032 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.032 20:24:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.290 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.548 00:14:48.548 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.548 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.548 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.806 { 00:14:48.806 "cntlid": 39, 00:14:48.806 "qid": 0, 00:14:48.806 "state": "enabled", 00:14:48.806 "listen_address": { 00:14:48.806 "trtype": "RDMA", 00:14:48.806 "adrfam": "IPv4", 00:14:48.806 "traddr": "192.168.100.8", 00:14:48.806 "trsvcid": "4420" 00:14:48.806 }, 00:14:48.806 "peer_address": { 00:14:48.806 "trtype": "RDMA", 00:14:48.806 "adrfam": "IPv4", 00:14:48.806 "traddr": "192.168.100.8", 00:14:48.806 "trsvcid": "34330" 00:14:48.806 }, 00:14:48.806 "auth": { 00:14:48.806 "state": "completed", 00:14:48.806 "digest": "sha256", 00:14:48.806 "dhgroup": "ffdhe6144" 00:14:48.806 } 00:14:48.806 } 00:14:48.806 ]' 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.806 20:24:01 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:48.806 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.065 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.065 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.065 20:24:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.065 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:14:49.631 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.889 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:49.889 20:24:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.889 20:24:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.889 20:24:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.889 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:49.889 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.889 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.889 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
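Every iteration in this trace follows the same connect_authenticate pattern from target/auth.sh: pin the SPDK host to one DH-HMAC-CHAP digest/dhgroup combination, register the host NQN on the subsystem with a key pair, attach a controller through the host RPC socket, confirm that the qpair reports auth state "completed" with the expected digest and dhgroup, then detach. A condensed sketch of one pass is shown below, reconstructed only from the commands visible in the trace; it assumes key0/ckey0 were loaded into the keyring earlier in the run and that the target-side rpc_cmd wrapper talks to SPDK's default RPC socket, neither of which appears in this part of the log.
# Sketch of one host-side authentication pass (sha256 / ffdhe8192 / key0).
# Only the host-side calls go through /var/tmp/host.sock; the target-side socket is assumed.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0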
00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.147 20:24:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.406 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.665 { 00:14:50.665 "cntlid": 41, 00:14:50.665 "qid": 0, 00:14:50.665 "state": "enabled", 00:14:50.665 "listen_address": { 00:14:50.665 "trtype": "RDMA", 00:14:50.665 "adrfam": "IPv4", 00:14:50.665 "traddr": "192.168.100.8", 00:14:50.665 "trsvcid": "4420" 00:14:50.665 }, 00:14:50.665 "peer_address": { 00:14:50.665 "trtype": "RDMA", 00:14:50.665 "adrfam": "IPv4", 00:14:50.665 "traddr": "192.168.100.8", 00:14:50.665 "trsvcid": "60528" 00:14:50.665 }, 00:14:50.665 "auth": { 00:14:50.665 "state": "completed", 00:14:50.665 "digest": "sha256", 00:14:50.665 "dhgroup": "ffdhe8192" 00:14:50.665 } 00:14:50.665 } 00:14:50.665 ]' 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.665 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.923 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.923 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.923 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.923 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.923 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.923 20:24:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.857 20:24:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.424 00:14:52.424 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.424 20:24:05 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.424 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.683 { 00:14:52.683 "cntlid": 43, 00:14:52.683 "qid": 0, 00:14:52.683 "state": "enabled", 00:14:52.683 "listen_address": { 00:14:52.683 "trtype": "RDMA", 00:14:52.683 "adrfam": "IPv4", 00:14:52.683 "traddr": "192.168.100.8", 00:14:52.683 "trsvcid": "4420" 00:14:52.683 }, 00:14:52.683 "peer_address": { 00:14:52.683 "trtype": "RDMA", 00:14:52.683 "adrfam": "IPv4", 00:14:52.683 "traddr": "192.168.100.8", 00:14:52.683 "trsvcid": "40508" 00:14:52.683 }, 00:14:52.683 "auth": { 00:14:52.683 "state": "completed", 00:14:52.683 "digest": "sha256", 00:14:52.683 "dhgroup": "ffdhe8192" 00:14:52.683 } 00:14:52.683 } 00:14:52.683 ]' 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.683 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.942 20:24:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:14:53.510 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.770 20:24:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.338 00:14:54.338 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.338 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.338 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.597 { 00:14:54.597 "cntlid": 45, 00:14:54.597 "qid": 0, 00:14:54.597 "state": "enabled", 00:14:54.597 "listen_address": { 00:14:54.597 "trtype": "RDMA", 00:14:54.597 "adrfam": "IPv4", 
00:14:54.597 "traddr": "192.168.100.8", 00:14:54.597 "trsvcid": "4420" 00:14:54.597 }, 00:14:54.597 "peer_address": { 00:14:54.597 "trtype": "RDMA", 00:14:54.597 "adrfam": "IPv4", 00:14:54.597 "traddr": "192.168.100.8", 00:14:54.597 "trsvcid": "36875" 00:14:54.597 }, 00:14:54.597 "auth": { 00:14:54.597 "state": "completed", 00:14:54.597 "digest": "sha256", 00:14:54.597 "dhgroup": "ffdhe8192" 00:14:54.597 } 00:14:54.597 } 00:14:54.597 ]' 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.597 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.856 20:24:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:14:55.423 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.683 20:24:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.251 00:14:56.251 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.251 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.251 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.510 { 00:14:56.510 "cntlid": 47, 00:14:56.510 "qid": 0, 00:14:56.510 "state": "enabled", 00:14:56.510 "listen_address": { 00:14:56.510 "trtype": "RDMA", 00:14:56.510 "adrfam": "IPv4", 00:14:56.510 "traddr": "192.168.100.8", 00:14:56.510 "trsvcid": "4420" 00:14:56.510 }, 00:14:56.510 "peer_address": { 00:14:56.510 "trtype": "RDMA", 00:14:56.510 "adrfam": "IPv4", 00:14:56.510 "traddr": "192.168.100.8", 00:14:56.510 "trsvcid": "38974" 00:14:56.510 }, 00:14:56.510 "auth": { 00:14:56.510 "state": "completed", 00:14:56.510 "digest": "sha256", 00:14:56.510 "dhgroup": "ffdhe8192" 00:14:56.510 } 00:14:56.510 } 00:14:56.510 ]' 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.510 20:24:09 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.510 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.769 20:24:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.336 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.596 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.596 20:24:10 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.855 00:14:57.855 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.855 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.855 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.114 { 00:14:58.114 "cntlid": 49, 00:14:58.114 "qid": 0, 00:14:58.114 "state": "enabled", 00:14:58.114 "listen_address": { 00:14:58.114 "trtype": "RDMA", 00:14:58.114 "adrfam": "IPv4", 00:14:58.114 "traddr": "192.168.100.8", 00:14:58.114 "trsvcid": "4420" 00:14:58.114 }, 00:14:58.114 "peer_address": { 00:14:58.114 "trtype": "RDMA", 00:14:58.114 "adrfam": "IPv4", 00:14:58.114 "traddr": "192.168.100.8", 00:14:58.114 "trsvcid": "58225" 00:14:58.114 }, 00:14:58.114 "auth": { 00:14:58.114 "state": "completed", 00:14:58.114 "digest": "sha384", 00:14:58.114 "dhgroup": "null" 00:14:58.114 } 00:14:58.114 } 00:14:58.114 ]' 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:58.114 20:24:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.114 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.114 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.114 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.373 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:14:58.941 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.941 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.941 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:58.941 20:24:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.941 20:24:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.200 20:24:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.200 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.200 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.200 20:24:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.200 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.459 00:14:59.459 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.459 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.459 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.718 20:24:12 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.718 { 00:14:59.718 "cntlid": 51, 00:14:59.718 "qid": 0, 00:14:59.718 "state": "enabled", 00:14:59.718 "listen_address": { 00:14:59.718 "trtype": "RDMA", 00:14:59.718 "adrfam": "IPv4", 00:14:59.718 "traddr": "192.168.100.8", 00:14:59.718 "trsvcid": "4420" 00:14:59.718 }, 00:14:59.718 "peer_address": { 00:14:59.718 "trtype": "RDMA", 00:14:59.718 "adrfam": "IPv4", 00:14:59.718 "traddr": "192.168.100.8", 00:14:59.718 "trsvcid": "44282" 00:14:59.718 }, 00:14:59.718 "auth": { 00:14:59.718 "state": "completed", 00:14:59.718 "digest": "sha384", 00:14:59.718 "dhgroup": "null" 00:14:59.718 } 00:14:59.718 } 00:14:59.718 ]' 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.718 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.977 20:24:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:00.544 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.544 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:00.544 20:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.544 20:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.544 20:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.544 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.544 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.544 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.802 
20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.802 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.061 00:15:01.061 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.061 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.061 20:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.319 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.319 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.319 20:24:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.319 20:24:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.319 20:24:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.319 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.319 { 00:15:01.319 "cntlid": 53, 00:15:01.319 "qid": 0, 00:15:01.320 "state": "enabled", 00:15:01.320 "listen_address": { 00:15:01.320 "trtype": "RDMA", 00:15:01.320 "adrfam": "IPv4", 00:15:01.320 "traddr": "192.168.100.8", 00:15:01.320 "trsvcid": "4420" 00:15:01.320 }, 00:15:01.320 "peer_address": { 00:15:01.320 "trtype": "RDMA", 00:15:01.320 "adrfam": "IPv4", 00:15:01.320 "traddr": "192.168.100.8", 00:15:01.320 "trsvcid": "55904" 00:15:01.320 }, 00:15:01.320 "auth": { 00:15:01.320 "state": "completed", 00:15:01.320 "digest": "sha384", 00:15:01.320 "dhgroup": "null" 00:15:01.320 } 00:15:01.320 } 00:15:01.320 ]' 00:15:01.320 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:15:01.320 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.320 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.320 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:01.320 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.320 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.320 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.320 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.579 20:24:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:02.147 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.147 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:02.147 20:24:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.147 20:24:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.406 20:24:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.406 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.406 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.406 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.406 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:02.407 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:02.665 00:15:02.665 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.666 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.666 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.924 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.925 { 00:15:02.925 "cntlid": 55, 00:15:02.925 "qid": 0, 00:15:02.925 "state": "enabled", 00:15:02.925 "listen_address": { 00:15:02.925 "trtype": "RDMA", 00:15:02.925 "adrfam": "IPv4", 00:15:02.925 "traddr": "192.168.100.8", 00:15:02.925 "trsvcid": "4420" 00:15:02.925 }, 00:15:02.925 "peer_address": { 00:15:02.925 "trtype": "RDMA", 00:15:02.925 "adrfam": "IPv4", 00:15:02.925 "traddr": "192.168.100.8", 00:15:02.925 "trsvcid": "56372" 00:15:02.925 }, 00:15:02.925 "auth": { 00:15:02.925 "state": "completed", 00:15:02.925 "digest": "sha384", 00:15:02.925 "dhgroup": "null" 00:15:02.925 } 00:15:02.925 } 00:15:02.925 ]' 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.925 20:24:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.185 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:03.753 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.012 20:24:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.271 00:15:04.271 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.271 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.271 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.530 { 00:15:04.530 "cntlid": 57, 00:15:04.530 "qid": 0, 00:15:04.530 "state": "enabled", 00:15:04.530 "listen_address": { 00:15:04.530 "trtype": "RDMA", 00:15:04.530 "adrfam": "IPv4", 00:15:04.530 "traddr": "192.168.100.8", 00:15:04.530 "trsvcid": "4420" 00:15:04.530 }, 00:15:04.530 "peer_address": { 00:15:04.530 "trtype": "RDMA", 00:15:04.530 "adrfam": "IPv4", 00:15:04.530 "traddr": "192.168.100.8", 00:15:04.530 "trsvcid": "36197" 00:15:04.530 }, 00:15:04.530 "auth": { 00:15:04.530 "state": "completed", 00:15:04.530 "digest": "sha384", 00:15:04.530 "dhgroup": "ffdhe2048" 00:15:04.530 } 00:15:04.530 } 00:15:04.530 ]' 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.530 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.789 20:24:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:15:05.357 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.616 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:05.617 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.876 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.876 20:24:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.135 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.135 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.135 20:24:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.135 20:24:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.135 20:24:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.135 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.135 { 00:15:06.135 "cntlid": 59, 00:15:06.135 "qid": 0, 00:15:06.135 "state": "enabled", 00:15:06.135 "listen_address": { 00:15:06.135 "trtype": "RDMA", 00:15:06.135 "adrfam": "IPv4", 00:15:06.135 "traddr": "192.168.100.8", 00:15:06.135 "trsvcid": "4420" 
00:15:06.135 }, 00:15:06.135 "peer_address": { 00:15:06.135 "trtype": "RDMA", 00:15:06.135 "adrfam": "IPv4", 00:15:06.135 "traddr": "192.168.100.8", 00:15:06.136 "trsvcid": "48234" 00:15:06.136 }, 00:15:06.136 "auth": { 00:15:06.136 "state": "completed", 00:15:06.136 "digest": "sha384", 00:15:06.136 "dhgroup": "ffdhe2048" 00:15:06.136 } 00:15:06.136 } 00:15:06.136 ]' 00:15:06.136 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.136 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.136 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.136 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:06.136 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.394 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.394 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.394 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.394 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:07.338 20:24:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.338 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.597 00:15:07.597 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.597 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.597 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.856 { 00:15:07.856 "cntlid": 61, 00:15:07.856 "qid": 0, 00:15:07.856 "state": "enabled", 00:15:07.856 "listen_address": { 00:15:07.856 "trtype": "RDMA", 00:15:07.856 "adrfam": "IPv4", 00:15:07.856 "traddr": "192.168.100.8", 00:15:07.856 "trsvcid": "4420" 00:15:07.856 }, 00:15:07.856 "peer_address": { 00:15:07.856 "trtype": "RDMA", 00:15:07.856 "adrfam": "IPv4", 00:15:07.856 "traddr": "192.168.100.8", 00:15:07.856 "trsvcid": "36862" 00:15:07.856 }, 00:15:07.856 "auth": { 00:15:07.856 "state": "completed", 00:15:07.856 "digest": "sha384", 00:15:07.856 "dhgroup": "ffdhe2048" 00:15:07.856 } 00:15:07.856 } 00:15:07.856 ]' 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.856 20:24:20 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.856 20:24:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.115 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:08.681 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.940 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.941 20:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.200 00:15:09.200 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.200 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.200 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.459 { 00:15:09.459 "cntlid": 63, 00:15:09.459 "qid": 0, 00:15:09.459 "state": "enabled", 00:15:09.459 "listen_address": { 00:15:09.459 "trtype": "RDMA", 00:15:09.459 "adrfam": "IPv4", 00:15:09.459 "traddr": "192.168.100.8", 00:15:09.459 "trsvcid": "4420" 00:15:09.459 }, 00:15:09.459 "peer_address": { 00:15:09.459 "trtype": "RDMA", 00:15:09.459 "adrfam": "IPv4", 00:15:09.459 "traddr": "192.168.100.8", 00:15:09.459 "trsvcid": "46636" 00:15:09.459 }, 00:15:09.459 "auth": { 00:15:09.459 "state": "completed", 00:15:09.459 "digest": "sha384", 00:15:09.459 "dhgroup": "ffdhe2048" 00:15:09.459 } 00:15:09.459 } 00:15:09.459 ]' 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.459 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.718 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.718 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.718 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.718 20:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:10.653 20:24:23 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.653 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.912 00:15:10.912 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.912 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.912 20:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
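For readers following the xtrace, every connect_authenticate pass in this log is the same short RPC sequence, parameterized by digest, dhgroup and key index. A minimal sketch of the pass traced above (sha384 / ffdhe3072 / key0) follows; the paths, NQNs and address are taken from the trace, while key0/ckey0 name DH-HMAC-CHAP keys registered earlier in target/auth.sh (not shown in this part of the log), and rpc_cmd is the autotest helper that forwards to the target application's RPC socket.

# one connect_authenticate pass, sketched from the trace (assumptions noted above)
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

# 1) limit the host-side bdev layer to the digest/dhgroup combination under test
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# 2) allow the host NQN on the subsystem with the matching keys (target-side RPC)
rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3) attach a controller over RDMA, authenticating with the same keys
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0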
00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.170 { 00:15:11.170 "cntlid": 65, 00:15:11.170 "qid": 0, 00:15:11.170 "state": "enabled", 00:15:11.170 "listen_address": { 00:15:11.170 "trtype": "RDMA", 00:15:11.170 "adrfam": "IPv4", 00:15:11.170 "traddr": "192.168.100.8", 00:15:11.170 "trsvcid": "4420" 00:15:11.170 }, 00:15:11.170 "peer_address": { 00:15:11.170 "trtype": "RDMA", 00:15:11.170 "adrfam": "IPv4", 00:15:11.170 "traddr": "192.168.100.8", 00:15:11.170 "trsvcid": "58837" 00:15:11.170 }, 00:15:11.170 "auth": { 00:15:11.170 "state": "completed", 00:15:11.170 "digest": "sha384", 00:15:11.170 "dhgroup": "ffdhe3072" 00:15:11.170 } 00:15:11.170 } 00:15:11.170 ]' 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:11.170 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.429 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.429 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.429 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.429 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:15:12.062 20:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.369 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 
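The checks at target/auth.sh@44-48 in the trace are what actually decide pass/fail for each combination: the test reads the controller name from the host socket and the qpair list from the target, then compares the negotiated auth fields with jq. Roughly, as a sketch using the same NQNs and RPC script as above:

# confirm the controller attached and that DH-HMAC-CHAP negotiated what was requested
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
[[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# the target reports one qpair whose auth block matches the requested parameters
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(echo "$qpairs" | jq -r '.[0].auth.digest')  == sha384    ]]   # digest under test
[[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == ffdhe3072 ]]   # dhgroup under test
[[ $(echo "$qpairs" | jq -r '.[0].auth.state')   == completed ]]   # authentication succeeded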
00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.370 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.629 00:15:12.629 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.629 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.629 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.889 { 00:15:12.889 "cntlid": 67, 00:15:12.889 "qid": 0, 00:15:12.889 "state": "enabled", 00:15:12.889 "listen_address": { 00:15:12.889 "trtype": "RDMA", 00:15:12.889 "adrfam": "IPv4", 00:15:12.889 "traddr": "192.168.100.8", 00:15:12.889 "trsvcid": "4420" 00:15:12.889 }, 00:15:12.889 "peer_address": { 00:15:12.889 "trtype": "RDMA", 00:15:12.889 "adrfam": "IPv4", 00:15:12.889 "traddr": "192.168.100.8", 00:15:12.889 "trsvcid": "54246" 00:15:12.889 }, 00:15:12.889 "auth": { 00:15:12.889 "state": "completed", 00:15:12.889 "digest": "sha384", 00:15:12.889 "dhgroup": "ffdhe3072" 00:15:12.889 } 00:15:12.889 } 00:15:12.889 ]' 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.889 20:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.149 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:13.717 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:13.976 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.977 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.977 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.977 20:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.977 20:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.977 20:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:15:13.977 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.977 20:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.236 00:15:14.236 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.236 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.236 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.495 { 00:15:14.495 "cntlid": 69, 00:15:14.495 "qid": 0, 00:15:14.495 "state": "enabled", 00:15:14.495 "listen_address": { 00:15:14.495 "trtype": "RDMA", 00:15:14.495 "adrfam": "IPv4", 00:15:14.495 "traddr": "192.168.100.8", 00:15:14.495 "trsvcid": "4420" 00:15:14.495 }, 00:15:14.495 "peer_address": { 00:15:14.495 "trtype": "RDMA", 00:15:14.495 "adrfam": "IPv4", 00:15:14.495 "traddr": "192.168.100.8", 00:15:14.495 "trsvcid": "44003" 00:15:14.495 }, 00:15:14.495 "auth": { 00:15:14.495 "state": "completed", 00:15:14.495 "digest": "sha384", 00:15:14.495 "dhgroup": "ffdhe3072" 00:15:14.495 } 00:15:14.495 } 00:15:14.495 ]' 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.495 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.755 20:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 
--dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:15.323 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.582 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:15.582 20:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.582 20:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.582 20:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.582 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.582 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.582 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.840 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:15.840 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.840 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.840 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:15.840 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:15.840 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.840 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:15.840 20:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.841 20:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.841 20:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.841 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.841 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.841 00:15:16.100 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.100 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.100 20:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.100 
20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.100 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.100 20:24:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.100 20:24:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.100 20:24:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.100 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.100 { 00:15:16.100 "cntlid": 71, 00:15:16.100 "qid": 0, 00:15:16.100 "state": "enabled", 00:15:16.100 "listen_address": { 00:15:16.100 "trtype": "RDMA", 00:15:16.100 "adrfam": "IPv4", 00:15:16.100 "traddr": "192.168.100.8", 00:15:16.100 "trsvcid": "4420" 00:15:16.100 }, 00:15:16.100 "peer_address": { 00:15:16.100 "trtype": "RDMA", 00:15:16.100 "adrfam": "IPv4", 00:15:16.100 "traddr": "192.168.100.8", 00:15:16.100 "trsvcid": "54237" 00:15:16.100 }, 00:15:16.100 "auth": { 00:15:16.100 "state": "completed", 00:15:16.100 "digest": "sha384", 00:15:16.100 "dhgroup": "ffdhe3072" 00:15:16.100 } 00:15:16.100 } 00:15:16.100 ]' 00:15:16.100 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.100 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.100 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.359 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.359 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.359 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.359 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.359 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.359 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:17.294 20:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.294 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.295 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.295 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.553 00:15:17.553 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.553 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.553 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.812 { 00:15:17.812 "cntlid": 73, 00:15:17.812 "qid": 0, 00:15:17.812 "state": "enabled", 00:15:17.812 "listen_address": { 00:15:17.812 "trtype": "RDMA", 00:15:17.812 "adrfam": "IPv4", 00:15:17.812 "traddr": "192.168.100.8", 00:15:17.812 "trsvcid": "4420" 00:15:17.812 }, 00:15:17.812 "peer_address": { 00:15:17.812 "trtype": "RDMA", 00:15:17.812 "adrfam": "IPv4", 00:15:17.812 
"traddr": "192.168.100.8", 00:15:17.812 "trsvcid": "60522" 00:15:17.812 }, 00:15:17.812 "auth": { 00:15:17.812 "state": "completed", 00:15:17.812 "digest": "sha384", 00:15:17.812 "dhgroup": "ffdhe4096" 00:15:17.812 } 00:15:17.812 } 00:15:17.812 ]' 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.812 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.071 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:18.071 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.071 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.071 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.071 20:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.071 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:19.006 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.007 20:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.265 00:15:19.265 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.265 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.265 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.524 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.524 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.524 20:24:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.524 20:24:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.524 20:24:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.524 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.524 { 00:15:19.524 "cntlid": 75, 00:15:19.524 "qid": 0, 00:15:19.524 "state": "enabled", 00:15:19.525 "listen_address": { 00:15:19.525 "trtype": "RDMA", 00:15:19.525 "adrfam": "IPv4", 00:15:19.525 "traddr": "192.168.100.8", 00:15:19.525 "trsvcid": "4420" 00:15:19.525 }, 00:15:19.525 "peer_address": { 00:15:19.525 "trtype": "RDMA", 00:15:19.525 "adrfam": "IPv4", 00:15:19.525 "traddr": "192.168.100.8", 00:15:19.525 "trsvcid": "54145" 00:15:19.525 }, 00:15:19.525 "auth": { 00:15:19.525 "state": "completed", 00:15:19.525 "digest": "sha384", 00:15:19.525 "dhgroup": "ffdhe4096" 00:15:19.525 } 00:15:19.525 } 00:15:19.525 ]' 00:15:19.525 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.525 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.525 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.525 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.525 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.783 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.783 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
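After each verified attach, the trace tears the path back down and re-exercises the same secrets from the kernel initiator with nvme-cli before moving on to the next key/dhgroup combination. A sketch of that teardown, with the literal DHHC-1 secrets from the trace replaced by placeholder variables:

# drop the SPDK host-side controller used for the in-process check
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_detach_controller nvme0

# reconnect from the kernel NVMe initiator, authenticating both directions;
# DHCHAP_SECRET / DHCHAP_CTRL_SECRET stand in for the DHHC-1:xx:... strings in the log
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q $HOSTNQN --hostid 803833e2-2ada-e911-906e-0017a4403562 \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# revoke the host's access on the target before the next combination
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN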
00:15:19.783 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.783 20:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:20.351 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.609 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:20.609 20:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.609 20:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.609 20:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.609 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.609 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.610 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.868 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.126 00:15:21.126 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.126 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.126 20:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.126 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.126 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.126 20:24:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.126 20:24:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.126 20:24:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.126 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.126 { 00:15:21.126 "cntlid": 77, 00:15:21.126 "qid": 0, 00:15:21.126 "state": "enabled", 00:15:21.126 "listen_address": { 00:15:21.126 "trtype": "RDMA", 00:15:21.126 "adrfam": "IPv4", 00:15:21.126 "traddr": "192.168.100.8", 00:15:21.126 "trsvcid": "4420" 00:15:21.126 }, 00:15:21.126 "peer_address": { 00:15:21.126 "trtype": "RDMA", 00:15:21.126 "adrfam": "IPv4", 00:15:21.126 "traddr": "192.168.100.8", 00:15:21.126 "trsvcid": "41730" 00:15:21.126 }, 00:15:21.126 "auth": { 00:15:21.126 "state": "completed", 00:15:21.126 "digest": "sha384", 00:15:21.126 "dhgroup": "ffdhe4096" 00:15:21.126 } 00:15:21.126 } 00:15:21.126 ]' 00:15:21.126 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.384 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.384 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.384 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:21.384 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.384 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.384 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.384 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.643 20:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:22.210 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.210 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:22.210 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.210 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.210 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.210 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.210 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.210 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.469 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.727 00:15:22.727 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.727 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.727 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
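Each iteration also exercises the kernel initiator path seen in the nvme connect/disconnect lines above. A minimal sketch of that leg, reusing this run's address and host ID; SECRET and CTRL_SECRET are placeholders standing in for the DHHC-1 strings logged above, and the nvmf_subsystem_remove_host call is shown against rpc.py's default socket under the same assumption as the previous sketch.

  # Kernel-initiator check performed for each key id, followed by target-side cleanup.
  SECRET='DHHC-1:01:<host secret as logged above>'
  CTRL_SECRET='DHHC-1:02:<bidirectional controller secret as logged above>'

  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid 803833e2-2ada-e911-906e-0017a4403562 \
      --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Remove the host from the subsystem before the next key id is configured.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562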
00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.986 { 00:15:22.986 "cntlid": 79, 00:15:22.986 "qid": 0, 00:15:22.986 "state": "enabled", 00:15:22.986 "listen_address": { 00:15:22.986 "trtype": "RDMA", 00:15:22.986 "adrfam": "IPv4", 00:15:22.986 "traddr": "192.168.100.8", 00:15:22.986 "trsvcid": "4420" 00:15:22.986 }, 00:15:22.986 "peer_address": { 00:15:22.986 "trtype": "RDMA", 00:15:22.986 "adrfam": "IPv4", 00:15:22.986 "traddr": "192.168.100.8", 00:15:22.986 "trsvcid": "56620" 00:15:22.986 }, 00:15:22.986 "auth": { 00:15:22.986 "state": "completed", 00:15:22.986 "digest": "sha384", 00:15:22.986 "dhgroup": "ffdhe4096" 00:15:22.986 } 00:15:22.986 } 00:15:22.986 ]' 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.986 20:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.245 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:23.813 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:24.071 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:24.071 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:15:24.071 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.071 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:24.071 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:24.072 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.072 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.072 20:24:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.072 20:24:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.072 20:24:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.072 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.072 20:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.331 00:15:24.590 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.590 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.590 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.590 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.591 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.591 20:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.591 20:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.591 20:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.591 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.591 { 00:15:24.591 "cntlid": 81, 00:15:24.591 "qid": 0, 00:15:24.591 "state": "enabled", 00:15:24.591 "listen_address": { 00:15:24.591 "trtype": "RDMA", 00:15:24.591 "adrfam": "IPv4", 00:15:24.591 "traddr": "192.168.100.8", 00:15:24.591 "trsvcid": "4420" 00:15:24.591 }, 00:15:24.591 "peer_address": { 00:15:24.591 "trtype": "RDMA", 00:15:24.591 "adrfam": "IPv4", 00:15:24.591 "traddr": "192.168.100.8", 00:15:24.591 "trsvcid": "45744" 00:15:24.591 }, 00:15:24.591 "auth": { 00:15:24.591 "state": "completed", 00:15:24.591 "digest": "sha384", 00:15:24.591 "dhgroup": "ffdhe6144" 00:15:24.591 } 00:15:24.591 } 00:15:24.591 ]' 00:15:24.591 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.591 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.591 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:15:24.849 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:24.849 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.849 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.849 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.849 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.850 20:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.787 20:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.788 20:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.788 20:24:38 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.788 20:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.356 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.356 { 00:15:26.356 "cntlid": 83, 00:15:26.356 "qid": 0, 00:15:26.356 "state": "enabled", 00:15:26.356 "listen_address": { 00:15:26.356 "trtype": "RDMA", 00:15:26.356 "adrfam": "IPv4", 00:15:26.356 "traddr": "192.168.100.8", 00:15:26.356 "trsvcid": "4420" 00:15:26.356 }, 00:15:26.356 "peer_address": { 00:15:26.356 "trtype": "RDMA", 00:15:26.356 "adrfam": "IPv4", 00:15:26.356 "traddr": "192.168.100.8", 00:15:26.356 "trsvcid": "58053" 00:15:26.356 }, 00:15:26.356 "auth": { 00:15:26.356 "state": "completed", 00:15:26.356 "digest": "sha384", 00:15:26.356 "dhgroup": "ffdhe6144" 00:15:26.356 } 00:15:26.356 } 00:15:26.356 ]' 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.356 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.615 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:26.615 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.615 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.615 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.615 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.615 20:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.551 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.120 00:15:28.120 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.120 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.120 20:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.120 { 00:15:28.120 "cntlid": 85, 00:15:28.120 "qid": 0, 00:15:28.120 "state": "enabled", 00:15:28.120 "listen_address": { 00:15:28.120 "trtype": "RDMA", 00:15:28.120 "adrfam": "IPv4", 00:15:28.120 "traddr": "192.168.100.8", 00:15:28.120 "trsvcid": "4420" 00:15:28.120 }, 00:15:28.120 "peer_address": { 00:15:28.120 "trtype": "RDMA", 00:15:28.120 "adrfam": "IPv4", 00:15:28.120 "traddr": "192.168.100.8", 00:15:28.120 "trsvcid": "38169" 00:15:28.120 }, 00:15:28.120 "auth": { 00:15:28.120 "state": "completed", 00:15:28.120 "digest": "sha384", 00:15:28.120 "dhgroup": "ffdhe6144" 00:15:28.120 } 00:15:28.120 } 00:15:28.120 ]' 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.120 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.378 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.378 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.379 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.379 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:28.945 20:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.204 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:29.204 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.204 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.204 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.204 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.204 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.204 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:29.462 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:29.720 00:15:29.720 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.720 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.720 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.978 { 00:15:29.978 "cntlid": 87, 00:15:29.978 "qid": 0, 00:15:29.978 "state": "enabled", 00:15:29.978 "listen_address": { 00:15:29.978 "trtype": "RDMA", 00:15:29.978 "adrfam": "IPv4", 00:15:29.978 "traddr": "192.168.100.8", 00:15:29.978 "trsvcid": "4420" 00:15:29.978 }, 00:15:29.978 "peer_address": { 00:15:29.978 "trtype": "RDMA", 00:15:29.978 "adrfam": "IPv4", 00:15:29.978 "traddr": "192.168.100.8", 00:15:29.978 "trsvcid": "37118" 
00:15:29.978 }, 00:15:29.978 "auth": { 00:15:29.978 "state": "completed", 00:15:29.978 "digest": "sha384", 00:15:29.978 "dhgroup": "ffdhe6144" 00:15:29.978 } 00:15:29.978 } 00:15:29.978 ]' 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.978 20:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.236 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:30.807 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.807 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:30.807 20:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.807 20:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.074 20:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.641 00:15:31.641 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.641 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.641 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.899 { 00:15:31.899 "cntlid": 89, 00:15:31.899 "qid": 0, 00:15:31.899 "state": "enabled", 00:15:31.899 "listen_address": { 00:15:31.899 "trtype": "RDMA", 00:15:31.899 "adrfam": "IPv4", 00:15:31.899 "traddr": "192.168.100.8", 00:15:31.899 "trsvcid": "4420" 00:15:31.899 }, 00:15:31.899 "peer_address": { 00:15:31.899 "trtype": "RDMA", 00:15:31.899 "adrfam": "IPv4", 00:15:31.899 "traddr": "192.168.100.8", 00:15:31.899 "trsvcid": "39835" 00:15:31.899 }, 00:15:31.899 "auth": { 00:15:31.899 "state": "completed", 00:15:31.899 "digest": "sha384", 00:15:31.899 "dhgroup": "ffdhe8192" 00:15:31.899 } 00:15:31.899 } 00:15:31.899 ]' 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.899 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.899 20:24:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.157 20:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:15:32.724 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.724 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:32.724 20:24:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.724 20:24:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.724 20:24:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.724 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.724 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:32.724 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.983 20:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.550 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.550 { 00:15:33.550 "cntlid": 91, 00:15:33.550 "qid": 0, 00:15:33.550 "state": "enabled", 00:15:33.550 "listen_address": { 00:15:33.550 "trtype": "RDMA", 00:15:33.550 "adrfam": "IPv4", 00:15:33.550 "traddr": "192.168.100.8", 00:15:33.550 "trsvcid": "4420" 00:15:33.550 }, 00:15:33.550 "peer_address": { 00:15:33.550 "trtype": "RDMA", 00:15:33.550 "adrfam": "IPv4", 00:15:33.550 "traddr": "192.168.100.8", 00:15:33.550 "trsvcid": "46541" 00:15:33.550 }, 00:15:33.550 "auth": { 00:15:33.550 "state": "completed", 00:15:33.550 "digest": "sha384", 00:15:33.550 "dhgroup": "ffdhe8192" 00:15:33.550 } 00:15:33.550 } 00:15:33.550 ]' 00:15:33.550 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.809 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.809 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.809 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:33.809 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.809 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.809 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.809 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.809 20:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:34.745 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.745 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:34.745 20:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.745 20:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.746 20:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.355 00:15:35.355 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.355 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.355 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.661 20:24:48 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.661 { 00:15:35.661 "cntlid": 93, 00:15:35.661 "qid": 0, 00:15:35.661 "state": "enabled", 00:15:35.661 "listen_address": { 00:15:35.661 "trtype": "RDMA", 00:15:35.661 "adrfam": "IPv4", 00:15:35.661 "traddr": "192.168.100.8", 00:15:35.661 "trsvcid": "4420" 00:15:35.661 }, 00:15:35.661 "peer_address": { 00:15:35.661 "trtype": "RDMA", 00:15:35.661 "adrfam": "IPv4", 00:15:35.661 "traddr": "192.168.100.8", 00:15:35.661 "trsvcid": "51092" 00:15:35.661 }, 00:15:35.661 "auth": { 00:15:35.661 "state": "completed", 00:15:35.661 "digest": "sha384", 00:15:35.661 "dhgroup": "ffdhe8192" 00:15:35.661 } 00:15:35.661 } 00:15:35.661 ]' 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.661 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.919 20:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:36.486 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.486 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:36.486 20:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.486 20:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.486 20:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.486 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.486 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.486 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:36.745 20:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.313 00:15:37.313 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.313 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.313 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.313 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.313 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.314 20:24:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.314 20:24:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.314 20:24:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.314 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.314 { 00:15:37.314 "cntlid": 95, 00:15:37.314 "qid": 0, 00:15:37.314 "state": "enabled", 00:15:37.314 "listen_address": { 00:15:37.314 "trtype": "RDMA", 00:15:37.314 "adrfam": "IPv4", 00:15:37.314 "traddr": "192.168.100.8", 00:15:37.314 "trsvcid": "4420" 00:15:37.314 }, 00:15:37.314 "peer_address": { 00:15:37.314 "trtype": "RDMA", 00:15:37.314 "adrfam": "IPv4", 00:15:37.314 "traddr": "192.168.100.8", 00:15:37.314 "trsvcid": "56908" 00:15:37.314 }, 00:15:37.314 "auth": { 00:15:37.314 "state": "completed", 00:15:37.314 "digest": "sha384", 00:15:37.314 "dhgroup": "ffdhe8192" 00:15:37.314 } 00:15:37.314 } 00:15:37.314 ]' 00:15:37.314 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.314 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.314 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.573 
20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.573 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.573 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.573 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.573 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.832 20:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:38.400 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
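For reference, each connect_authenticate iteration recorded in this log follows the same host-side RPC sequence; the following is a condensed sketch of that sequence using the paths and key names visible above (digest, dhgroup, and key index are per-iteration values, <host_nqn> is a placeholder, and rpc_cmd is assumed to be the target-side rpc.py wrapper from autotest_common.sh):

    # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # target side: allow the host NQN on the subsystem with the key pair under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host_nqn> \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, authenticating with the same key pair
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q <host_nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # target side: verify the qpair negotiated the expected digest/dhgroup and completed auth
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed
    # host side: detach before the next iteration
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0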
00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.659 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.919 00:15:38.919 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.919 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.919 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.919 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.919 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.919 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.919 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.178 20:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.178 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.178 { 00:15:39.178 "cntlid": 97, 00:15:39.178 "qid": 0, 00:15:39.178 "state": "enabled", 00:15:39.178 "listen_address": { 00:15:39.178 "trtype": "RDMA", 00:15:39.178 "adrfam": "IPv4", 00:15:39.178 "traddr": "192.168.100.8", 00:15:39.178 "trsvcid": "4420" 00:15:39.178 }, 00:15:39.178 "peer_address": { 00:15:39.178 "trtype": "RDMA", 00:15:39.178 "adrfam": "IPv4", 00:15:39.178 "traddr": "192.168.100.8", 00:15:39.178 "trsvcid": "54453" 00:15:39.178 }, 00:15:39.178 "auth": { 00:15:39.178 "state": "completed", 00:15:39.178 "digest": "sha512", 00:15:39.178 "dhgroup": "null" 00:15:39.178 } 00:15:39.178 } 00:15:39.178 ]' 00:15:39.179 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.179 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.179 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.179 20:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:39.179 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.179 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.179 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.179 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.439 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:15:40.008 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.008 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:40.008 20:24:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.008 20:24:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.008 20:24:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.008 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.008 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:40.008 20:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.268 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.527 00:15:40.527 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.527 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.527 20:24:53 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.787 { 00:15:40.787 "cntlid": 99, 00:15:40.787 "qid": 0, 00:15:40.787 "state": "enabled", 00:15:40.787 "listen_address": { 00:15:40.787 "trtype": "RDMA", 00:15:40.787 "adrfam": "IPv4", 00:15:40.787 "traddr": "192.168.100.8", 00:15:40.787 "trsvcid": "4420" 00:15:40.787 }, 00:15:40.787 "peer_address": { 00:15:40.787 "trtype": "RDMA", 00:15:40.787 "adrfam": "IPv4", 00:15:40.787 "traddr": "192.168.100.8", 00:15:40.787 "trsvcid": "60177" 00:15:40.787 }, 00:15:40.787 "auth": { 00:15:40.787 "state": "completed", 00:15:40.787 "digest": "sha512", 00:15:40.787 "dhgroup": "null" 00:15:40.787 } 00:15:40.787 } 00:15:40.787 ]' 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.787 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.045 20:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:41.612 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.612 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:41.612 20:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.612 20:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.612 20:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.612 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.612 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:41.612 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.871 20:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.130 00:15:42.130 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.130 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.130 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.389 { 00:15:42.389 "cntlid": 101, 00:15:42.389 "qid": 0, 00:15:42.389 "state": "enabled", 00:15:42.389 "listen_address": { 00:15:42.389 "trtype": "RDMA", 00:15:42.389 "adrfam": "IPv4", 00:15:42.389 "traddr": "192.168.100.8", 00:15:42.389 "trsvcid": "4420" 00:15:42.389 }, 00:15:42.389 "peer_address": { 00:15:42.389 "trtype": "RDMA", 00:15:42.389 "adrfam": "IPv4", 00:15:42.389 
"traddr": "192.168.100.8", 00:15:42.389 "trsvcid": "39296" 00:15:42.389 }, 00:15:42.389 "auth": { 00:15:42.389 "state": "completed", 00:15:42.389 "digest": "sha512", 00:15:42.389 "dhgroup": "null" 00:15:42.389 } 00:15:42.389 } 00:15:42.389 ]' 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.389 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.648 20:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:43.216 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.476 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.735 00:15:43.735 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.735 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.735 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.994 { 00:15:43.994 "cntlid": 103, 00:15:43.994 "qid": 0, 00:15:43.994 "state": "enabled", 00:15:43.994 "listen_address": { 00:15:43.994 "trtype": "RDMA", 00:15:43.994 "adrfam": "IPv4", 00:15:43.994 "traddr": "192.168.100.8", 00:15:43.994 "trsvcid": "4420" 00:15:43.994 }, 00:15:43.994 "peer_address": { 00:15:43.994 "trtype": "RDMA", 00:15:43.994 "adrfam": "IPv4", 00:15:43.994 "traddr": "192.168.100.8", 00:15:43.994 "trsvcid": "42461" 00:15:43.994 }, 00:15:43.994 "auth": { 00:15:43.994 "state": "completed", 00:15:43.994 "digest": "sha512", 00:15:43.994 "dhgroup": "null" 00:15:43.994 } 00:15:43.994 } 00:15:43.994 ]' 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:43.994 20:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.253 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.253 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.253 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:44.253 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:44.821 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.080 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:45.080 20:24:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.080 20:24:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.080 20:24:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.080 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.080 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.080 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:45.080 20:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.339 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:15:45.598 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.598 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.598 { 00:15:45.598 "cntlid": 105, 00:15:45.598 "qid": 0, 00:15:45.598 "state": "enabled", 00:15:45.598 "listen_address": { 00:15:45.598 "trtype": "RDMA", 00:15:45.598 "adrfam": "IPv4", 00:15:45.599 "traddr": "192.168.100.8", 00:15:45.599 "trsvcid": "4420" 00:15:45.599 }, 00:15:45.599 "peer_address": { 00:15:45.599 "trtype": "RDMA", 00:15:45.599 "adrfam": "IPv4", 00:15:45.599 "traddr": "192.168.100.8", 00:15:45.599 "trsvcid": "58414" 00:15:45.599 }, 00:15:45.599 "auth": { 00:15:45.599 "state": "completed", 00:15:45.599 "digest": "sha512", 00:15:45.599 "dhgroup": "ffdhe2048" 00:15:45.599 } 00:15:45.599 } 00:15:45.599 ]' 00:15:45.599 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.599 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.599 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.858 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.858 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.858 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.858 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.858 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.858 20:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.795 20:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.055 00:15:47.055 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.055 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.055 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.313 20:25:00 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.313 { 00:15:47.313 "cntlid": 107, 00:15:47.313 "qid": 0, 00:15:47.313 "state": "enabled", 00:15:47.313 "listen_address": { 00:15:47.313 "trtype": "RDMA", 00:15:47.313 "adrfam": "IPv4", 00:15:47.313 "traddr": "192.168.100.8", 00:15:47.313 "trsvcid": "4420" 00:15:47.313 }, 00:15:47.313 "peer_address": { 00:15:47.313 "trtype": "RDMA", 00:15:47.313 "adrfam": "IPv4", 00:15:47.313 "traddr": "192.168.100.8", 00:15:47.313 "trsvcid": "36199" 00:15:47.313 }, 00:15:47.313 "auth": { 00:15:47.313 "state": "completed", 00:15:47.313 "digest": "sha512", 00:15:47.313 "dhgroup": "ffdhe2048" 00:15:47.313 } 00:15:47.313 } 00:15:47.313 ]' 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.313 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.572 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.572 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.572 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.572 20:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.506 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.765 00:15:48.765 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.765 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.765 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.025 { 00:15:49.025 "cntlid": 109, 00:15:49.025 "qid": 0, 00:15:49.025 "state": "enabled", 00:15:49.025 "listen_address": { 00:15:49.025 "trtype": "RDMA", 00:15:49.025 "adrfam": "IPv4", 00:15:49.025 "traddr": "192.168.100.8", 00:15:49.025 "trsvcid": "4420" 00:15:49.025 }, 00:15:49.025 "peer_address": { 00:15:49.025 "trtype": "RDMA", 00:15:49.025 "adrfam": "IPv4", 00:15:49.025 "traddr": "192.168.100.8", 00:15:49.025 "trsvcid": "41269" 00:15:49.025 }, 00:15:49.025 "auth": { 00:15:49.025 "state": "completed", 00:15:49.025 "digest": "sha512", 00:15:49.025 "dhgroup": "ffdhe2048" 00:15:49.025 } 00:15:49.025 } 00:15:49.025 ]' 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.025 20:25:01 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.025 20:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.284 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.284 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.284 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.284 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:50.221 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.222 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:50.222 20:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.222 20:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.222 20:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.222 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.222 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:50.222 20:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.222 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.481 00:15:50.481 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.481 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.481 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.743 { 00:15:50.743 "cntlid": 111, 00:15:50.743 "qid": 0, 00:15:50.743 "state": "enabled", 00:15:50.743 "listen_address": { 00:15:50.743 "trtype": "RDMA", 00:15:50.743 "adrfam": "IPv4", 00:15:50.743 "traddr": "192.168.100.8", 00:15:50.743 "trsvcid": "4420" 00:15:50.743 }, 00:15:50.743 "peer_address": { 00:15:50.743 "trtype": "RDMA", 00:15:50.743 "adrfam": "IPv4", 00:15:50.743 "traddr": "192.168.100.8", 00:15:50.743 "trsvcid": "51891" 00:15:50.743 }, 00:15:50.743 "auth": { 00:15:50.743 "state": "completed", 00:15:50.743 "digest": "sha512", 00:15:50.743 "dhgroup": "ffdhe2048" 00:15:50.743 } 00:15:50.743 } 00:15:50.743 ]' 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.743 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.002 20:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:51.568 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:51.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.826 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:51.826 20:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.826 20:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.826 20:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.826 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.826 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.826 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.827 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.085 00:15:52.085 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.085 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.085 20:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
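Each iteration also exercises the kernel initiator with nvme-cli before the host entry is removed from the subsystem; a condensed sketch of that leg, taken from the nvme connect/disconnect entries in this log, with the host NQN, host ID, and DHHC-1 secrets replaced by placeholders:

    # kernel initiator: connect with the DH-HMAC-CHAP secrets matching the key pair under test
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q <host_nqn> --hostid <host_uuid> \
        --dhchap-secret 'DHHC-1:00:<base64-key>:' --dhchap-ctrl-secret 'DHHC-1:03:<base64-ctrl-key>:'
    # tear down: disconnect and drop the host from the subsystem before the next key/dhgroup
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host_nqn>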
00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.344 { 00:15:52.344 "cntlid": 113, 00:15:52.344 "qid": 0, 00:15:52.344 "state": "enabled", 00:15:52.344 "listen_address": { 00:15:52.344 "trtype": "RDMA", 00:15:52.344 "adrfam": "IPv4", 00:15:52.344 "traddr": "192.168.100.8", 00:15:52.344 "trsvcid": "4420" 00:15:52.344 }, 00:15:52.344 "peer_address": { 00:15:52.344 "trtype": "RDMA", 00:15:52.344 "adrfam": "IPv4", 00:15:52.344 "traddr": "192.168.100.8", 00:15:52.344 "trsvcid": "52742" 00:15:52.344 }, 00:15:52.344 "auth": { 00:15:52.344 "state": "completed", 00:15:52.344 "digest": "sha512", 00:15:52.344 "dhgroup": "ffdhe3072" 00:15:52.344 } 00:15:52.344 } 00:15:52.344 ]' 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.344 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.602 20:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:15:53.171 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:53.430 
20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.430 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.689 00:15:53.689 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.689 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.689 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.950 { 00:15:53.950 "cntlid": 115, 00:15:53.950 "qid": 0, 00:15:53.950 "state": "enabled", 00:15:53.950 "listen_address": { 00:15:53.950 "trtype": "RDMA", 00:15:53.950 "adrfam": "IPv4", 00:15:53.950 "traddr": "192.168.100.8", 00:15:53.950 "trsvcid": "4420" 00:15:53.950 }, 00:15:53.950 "peer_address": { 00:15:53.950 "trtype": "RDMA", 00:15:53.950 "adrfam": "IPv4", 00:15:53.950 "traddr": "192.168.100.8", 00:15:53.950 "trsvcid": "33212" 00:15:53.950 }, 00:15:53.950 
"auth": { 00:15:53.950 "state": "completed", 00:15:53.950 "digest": "sha512", 00:15:53.950 "dhgroup": "ffdhe3072" 00:15:53.950 } 00:15:53.950 } 00:15:53.950 ]' 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.950 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.210 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.210 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.210 20:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.210 20:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:15:54.779 20:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.038 20:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:55.038 20:25:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.038 20:25:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.038 20:25:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.038 20:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.038 20:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:55.038 20:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.297 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.297 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.556 { 00:15:55.556 "cntlid": 117, 00:15:55.556 "qid": 0, 00:15:55.556 "state": "enabled", 00:15:55.556 "listen_address": { 00:15:55.556 "trtype": "RDMA", 00:15:55.556 "adrfam": "IPv4", 00:15:55.556 "traddr": "192.168.100.8", 00:15:55.556 "trsvcid": "4420" 00:15:55.556 }, 00:15:55.556 "peer_address": { 00:15:55.556 "trtype": "RDMA", 00:15:55.556 "adrfam": "IPv4", 00:15:55.556 "traddr": "192.168.100.8", 00:15:55.556 "trsvcid": "36832" 00:15:55.556 }, 00:15:55.556 "auth": { 00:15:55.556 "state": "completed", 00:15:55.556 "digest": "sha512", 00:15:55.556 "dhgroup": "ffdhe3072" 00:15:55.556 } 00:15:55.556 } 00:15:55.556 ]' 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.556 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.815 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.815 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.815 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.815 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.815 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.815 20:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.753 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.012 00:15:57.012 20:25:09 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.012 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.012 20:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.271 { 00:15:57.271 "cntlid": 119, 00:15:57.271 "qid": 0, 00:15:57.271 "state": "enabled", 00:15:57.271 "listen_address": { 00:15:57.271 "trtype": "RDMA", 00:15:57.271 "adrfam": "IPv4", 00:15:57.271 "traddr": "192.168.100.8", 00:15:57.271 "trsvcid": "4420" 00:15:57.271 }, 00:15:57.271 "peer_address": { 00:15:57.271 "trtype": "RDMA", 00:15:57.271 "adrfam": "IPv4", 00:15:57.271 "traddr": "192.168.100.8", 00:15:57.271 "trsvcid": "46523" 00:15:57.271 }, 00:15:57.271 "auth": { 00:15:57.271 "state": "completed", 00:15:57.271 "digest": "sha512", 00:15:57.271 "dhgroup": "ffdhe3072" 00:15:57.271 } 00:15:57.271 } 00:15:57.271 ]' 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.271 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.530 20:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:15:58.097 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.357 
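Each block of the trace above is one pass of the connect_authenticate helper in target/auth.sh (the pass just completed: sha512 digest, ffdhe3072 group, key index 3). Collapsing the test's hostrpc/rpc_cmd wrappers into direct rpc.py invocations, a single pass amounts to the sketch below; the paths, socket, NQNs and addresses are the ones in the log, while keyN/ckeyN are placeholders for the test's pre-generated DH-HMAC-CHAP keys (the controller key is passed only when one exists for that index, e.g. it is omitted for key3 in this run), and rpc_cmd is assumed to hit the target application's default RPC socket.

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock                       # host-side SPDK app (what hostrpc talks to)
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

  # host side: restrict the initiator to the digest/dhgroup under test (auth.sh@94)
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # target side: allow the host on the subsystem with its key pair (auth.sh@39)
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key keyN --dhchap-ctrlr-key ckeyN

  # host side: attach a controller over RDMA, authenticating with the same keys (auth.sh@40)
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key keyN --dhchap-ctrlr-key ckeyN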
20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.357 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.617 00:15:58.617 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.617 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.617 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:58.877 { 00:15:58.877 "cntlid": 121, 00:15:58.877 "qid": 0, 00:15:58.877 "state": "enabled", 00:15:58.877 "listen_address": { 00:15:58.877 "trtype": "RDMA", 00:15:58.877 "adrfam": "IPv4", 00:15:58.877 "traddr": "192.168.100.8", 00:15:58.877 "trsvcid": "4420" 00:15:58.877 }, 00:15:58.877 "peer_address": { 00:15:58.877 "trtype": "RDMA", 00:15:58.877 "adrfam": "IPv4", 00:15:58.877 "traddr": "192.168.100.8", 00:15:58.877 "trsvcid": "60375" 00:15:58.877 }, 00:15:58.877 "auth": { 00:15:58.877 "state": "completed", 00:15:58.877 "digest": "sha512", 00:15:58.877 "dhgroup": "ffdhe4096" 00:15:58.877 } 00:15:58.877 } 00:15:58.877 ]' 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.877 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.138 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.138 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.138 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.138 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.138 20:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.138 20:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:16:00.131 20:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.131 20:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:00.131 20:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.131 20:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.131 20:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.131 20:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.131 20:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.131 20:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.131 20:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.132 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.132 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.396 00:16:00.396 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.396 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.396 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.655 { 00:16:00.655 "cntlid": 123, 00:16:00.655 "qid": 0, 00:16:00.655 "state": "enabled", 00:16:00.655 "listen_address": { 00:16:00.655 "trtype": "RDMA", 00:16:00.655 "adrfam": "IPv4", 00:16:00.655 "traddr": "192.168.100.8", 00:16:00.655 "trsvcid": "4420" 00:16:00.655 }, 00:16:00.655 "peer_address": { 00:16:00.655 "trtype": "RDMA", 00:16:00.655 "adrfam": "IPv4", 00:16:00.655 "traddr": "192.168.100.8", 00:16:00.655 "trsvcid": "49654" 00:16:00.655 }, 00:16:00.655 "auth": { 00:16:00.655 "state": "completed", 00:16:00.655 "digest": "sha512", 00:16:00.655 "dhgroup": "ffdhe4096" 00:16:00.655 } 00:16:00.655 } 00:16:00.655 ]' 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- 
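After the controller attaches, auth.sh@44-48 checks that the negotiated parameters really match what was requested: it lists the host-side controllers, then pulls the auth block out of the target's qpair listing with jq. A minimal sketch of that verification, reusing the same rpc.py paths as above (same assumptions):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock

  # the attached bdev controller must show up on the host side as nvme0
  [[ $($RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # the target's qpair listing carries the negotiated auth parameters
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]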
target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.655 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.914 20:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:16:01.481 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.481 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:01.481 20:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.481 20:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.741 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.000 00:16:02.000 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.000 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.000 20:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.259 { 00:16:02.259 "cntlid": 125, 00:16:02.259 "qid": 0, 00:16:02.259 "state": "enabled", 00:16:02.259 "listen_address": { 00:16:02.259 "trtype": "RDMA", 00:16:02.259 "adrfam": "IPv4", 00:16:02.259 "traddr": "192.168.100.8", 00:16:02.259 "trsvcid": "4420" 00:16:02.259 }, 00:16:02.259 "peer_address": { 00:16:02.259 "trtype": "RDMA", 00:16:02.259 "adrfam": "IPv4", 00:16:02.259 "traddr": "192.168.100.8", 00:16:02.259 "trsvcid": "33746" 00:16:02.259 }, 00:16:02.259 "auth": { 00:16:02.259 "state": "completed", 00:16:02.259 "digest": "sha512", 00:16:02.259 "dhgroup": "ffdhe4096" 00:16:02.259 } 00:16:02.259 } 00:16:02.259 ]' 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.259 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.518 20:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret 
DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:16:03.085 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.342 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.600 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.858 20:25:16 
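The same key pair is then exercised from the kernel side: auth.sh@52-56 connects with nvme-cli, tears the connection down again, and removes the host entry so the next key/dhgroup combination starts clean. Roughly as below; the DHHC-1 strings are placeholders standing in for the pre-generated secrets visible in the trace.

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
  KEY='DHHC-1:02:<base64...>:'      # placeholder host secret
  CKEY='DHHC-1:01:<base64...>:'     # placeholder controller (bidirectional) secret

  nvme connect -t rdma -a 192.168.100.8 -n $SUBNQN -i 1 \
      -q $HOSTNQN --hostid 803833e2-2ada-e911-906e-0017a4403562 \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n $SUBNQN

  # target-side cleanup before the next iteration (auth.sh@56)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host $SUBNQN $HOSTNQN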
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.858 { 00:16:03.858 "cntlid": 127, 00:16:03.858 "qid": 0, 00:16:03.858 "state": "enabled", 00:16:03.858 "listen_address": { 00:16:03.858 "trtype": "RDMA", 00:16:03.858 "adrfam": "IPv4", 00:16:03.858 "traddr": "192.168.100.8", 00:16:03.858 "trsvcid": "4420" 00:16:03.858 }, 00:16:03.858 "peer_address": { 00:16:03.858 "trtype": "RDMA", 00:16:03.858 "adrfam": "IPv4", 00:16:03.858 "traddr": "192.168.100.8", 00:16:03.858 "trsvcid": "53270" 00:16:03.858 }, 00:16:03.858 "auth": { 00:16:03.858 "state": "completed", 00:16:03.858 "digest": "sha512", 00:16:03.858 "dhgroup": "ffdhe4096" 00:16:03.858 } 00:16:03.858 } 00:16:03.858 ]' 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.858 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.116 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.116 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.116 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.116 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.116 20:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.116 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:05.053 20:25:17 
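The target/auth.sh@92 and @93 markers right above are the loop headers driving all of this: for the sha512 digest the script walks every configured DH group and, inside that, every key index. The exact array contents are not visible here, so the sketch below only reconstructs the shape of the loop from the groups and helpers that appear in this part of the trace (hostrpc and connect_authenticate are functions defined earlier in auth.sh).

  # dhgroups as observed in this stretch of the log; the full list in auth.sh may differ
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do           # keys: the four pre-generated secrets, key0..key3
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done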
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.053 20:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.311 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.570 { 00:16:05.570 "cntlid": 129, 00:16:05.570 "qid": 0, 00:16:05.570 "state": "enabled", 00:16:05.570 "listen_address": { 00:16:05.570 "trtype": "RDMA", 00:16:05.570 "adrfam": "IPv4", 00:16:05.570 "traddr": "192.168.100.8", 00:16:05.570 "trsvcid": "4420" 00:16:05.570 }, 00:16:05.570 "peer_address": { 00:16:05.570 "trtype": "RDMA", 00:16:05.570 "adrfam": "IPv4", 00:16:05.570 "traddr": "192.168.100.8", 00:16:05.570 "trsvcid": "46807" 00:16:05.570 }, 00:16:05.570 "auth": { 
00:16:05.570 "state": "completed", 00:16:05.570 "digest": "sha512", 00:16:05.570 "dhgroup": "ffdhe6144" 00:16:05.570 } 00:16:05.570 } 00:16:05.570 ]' 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.570 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.828 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.828 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.828 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.828 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.828 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.828 20:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.766 20:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.336 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.336 { 00:16:07.336 "cntlid": 131, 00:16:07.336 "qid": 0, 00:16:07.336 "state": "enabled", 00:16:07.336 "listen_address": { 00:16:07.336 "trtype": "RDMA", 00:16:07.336 "adrfam": "IPv4", 00:16:07.336 "traddr": "192.168.100.8", 00:16:07.336 "trsvcid": "4420" 00:16:07.336 }, 00:16:07.336 "peer_address": { 00:16:07.336 "trtype": "RDMA", 00:16:07.336 "adrfam": "IPv4", 00:16:07.336 "traddr": "192.168.100.8", 00:16:07.336 "trsvcid": "44233" 00:16:07.336 }, 00:16:07.336 "auth": { 00:16:07.336 "state": "completed", 00:16:07.336 "digest": "sha512", 00:16:07.336 "dhgroup": "ffdhe6144" 00:16:07.336 } 00:16:07.336 } 00:16:07.336 ]' 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.336 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.595 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.595 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.595 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.595 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.595 20:25:20 nvmf_rdma.nvmf_auth_target -- 
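The recurring target/auth.sh@37 line, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), uses bash's ${var:+word} expansion to make the controller key optional ($3 is the key index passed to connect_authenticate): when the ckeys entry for that index is empty, as it is for key3 in this run, the ckey array expands to nothing and no --dhchap-ctrlr-key argument is passed. A standalone illustration of just that idiom, with a hypothetical ckeys array:

  ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]='')   # key3 deliberately has no controller key
  for keyid in 1 3; do
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid extra args: ${ckey[*]:-<none>}"
  done
  # prints: key1 extra args: --dhchap-ctrlr-key ckey1
  #         key3 extra args: <none>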
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.595 20:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.532 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.099 00:16:09.099 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.099 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.099 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.099 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.099 20:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.099 20:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.099 20:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.099 20:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.099 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.099 { 00:16:09.099 "cntlid": 133, 00:16:09.099 "qid": 0, 00:16:09.099 "state": "enabled", 00:16:09.099 "listen_address": { 00:16:09.099 "trtype": "RDMA", 00:16:09.099 "adrfam": "IPv4", 00:16:09.099 "traddr": "192.168.100.8", 00:16:09.099 "trsvcid": "4420" 00:16:09.099 }, 00:16:09.099 "peer_address": { 00:16:09.099 "trtype": "RDMA", 00:16:09.099 "adrfam": "IPv4", 00:16:09.099 "traddr": "192.168.100.8", 00:16:09.099 "trsvcid": "45970" 00:16:09.099 }, 00:16:09.099 "auth": { 00:16:09.099 "state": "completed", 00:16:09.099 "digest": "sha512", 00:16:09.099 "dhgroup": "ffdhe6144" 00:16:09.099 } 00:16:09.099 } 00:16:09.099 ]' 00:16:09.099 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.099 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.099 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.099 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.099 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.358 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.358 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.358 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.358 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:16:10.294 20:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target 
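On the secrets themselves: every --dhchap-secret / --dhchap-ctrl-secret value in this trace has the shape DHHC-1:NN:<base64>:. As I read the DH-HMAC-CHAP key representation (treat the field meanings here as an assumption, not spec text), DHHC-1 is the format version, the two-digit field identifies the hash used to transform the secret (00 for an untransformed secret), and the base64 payload carries the key material plus a trailing CRC-32; in this run the four test keys happen to use 00 through 03. A trivial way to split one of the strings from the trace:

  SECRET='DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK:'   # key1 as seen above
  IFS=: read -r version hash b64 _ <<<"$SECRET"
  echo "version=$version hash-id=$hash payload=$b64"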
-- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.294 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.552 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.812 { 00:16:10.812 
"cntlid": 135, 00:16:10.812 "qid": 0, 00:16:10.812 "state": "enabled", 00:16:10.812 "listen_address": { 00:16:10.812 "trtype": "RDMA", 00:16:10.812 "adrfam": "IPv4", 00:16:10.812 "traddr": "192.168.100.8", 00:16:10.812 "trsvcid": "4420" 00:16:10.812 }, 00:16:10.812 "peer_address": { 00:16:10.812 "trtype": "RDMA", 00:16:10.812 "adrfam": "IPv4", 00:16:10.812 "traddr": "192.168.100.8", 00:16:10.812 "trsvcid": "41526" 00:16:10.812 }, 00:16:10.812 "auth": { 00:16:10.812 "state": "completed", 00:16:10.812 "digest": "sha512", 00:16:10.812 "dhgroup": "ffdhe6144" 00:16:10.812 } 00:16:10.812 } 00:16:10.812 ]' 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.812 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.070 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:11.070 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.070 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.070 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.070 20:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.070 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.007 
20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.007 20:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.576 00:16:12.576 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.576 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.576 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.835 { 00:16:12.835 "cntlid": 137, 00:16:12.835 "qid": 0, 00:16:12.835 "state": "enabled", 00:16:12.835 "listen_address": { 00:16:12.835 "trtype": "RDMA", 00:16:12.835 "adrfam": "IPv4", 00:16:12.835 "traddr": "192.168.100.8", 00:16:12.835 "trsvcid": "4420" 00:16:12.835 }, 00:16:12.835 "peer_address": { 00:16:12.835 "trtype": "RDMA", 00:16:12.835 "adrfam": "IPv4", 00:16:12.835 "traddr": "192.168.100.8", 00:16:12.835 "trsvcid": "59107" 00:16:12.835 }, 00:16:12.835 "auth": { 00:16:12.835 "state": "completed", 00:16:12.835 "digest": "sha512", 00:16:12.835 "dhgroup": "ffdhe8192" 00:16:12.835 } 00:16:12.835 } 00:16:12.835 ]' 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.835 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.094 20:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:16:13.662 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.662 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:13.662 20:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.662 20:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.662 20:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.662 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.662 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:13.662 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.921 20:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.490 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.490 { 00:16:14.490 "cntlid": 139, 00:16:14.490 "qid": 0, 00:16:14.490 "state": "enabled", 00:16:14.490 "listen_address": { 00:16:14.490 "trtype": "RDMA", 00:16:14.490 "adrfam": "IPv4", 00:16:14.490 "traddr": "192.168.100.8", 00:16:14.490 "trsvcid": "4420" 00:16:14.490 }, 00:16:14.490 "peer_address": { 00:16:14.490 "trtype": "RDMA", 00:16:14.490 "adrfam": "IPv4", 00:16:14.490 "traddr": "192.168.100.8", 00:16:14.490 "trsvcid": "60037" 00:16:14.490 }, 00:16:14.490 "auth": { 00:16:14.490 "state": "completed", 00:16:14.490 "digest": "sha512", 00:16:14.490 "dhgroup": "ffdhe8192" 00:16:14.490 } 00:16:14.490 } 00:16:14.490 ]' 00:16:14.490 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.750 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.750 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.750 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.750 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.750 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.750 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.750 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.009 20:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjkxN2M0NmMwYzhmOGVmZWNlYjZiNTNlNzRkNWQ4NjEJtzcK: --dhchap-ctrl-secret 
DHHC-1:02:ZTQ1YTFjODRjNDA0NDc3YWFjNzI3NWNkYzRkZTk3NGJkMGFhZTBmZjNlZTE4NzE1y4GuKA==: 00:16:15.578 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.578 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:15.578 20:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.578 20:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.578 20:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.578 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.578 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:15.578 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.838 20:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.405 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.405 20:25:29 
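Each combination also gets a kernel-initiator pass: once the SPDK host has detached, nvme-cli connects to the same subsystem with the matching DHHC-1 secrets, disconnects, and the host entry is removed so the next key/group pairing starts clean. Roughly, with the secrets abbreviated here (the full values appear in the trace) and RPC/HOSTNQN as in the sketch above:

# Kernel-initiator leg of a round: connect with the DH-HMAC-CHAP secrets, then disconnect.
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid 803833e2-2ada-e911-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Drop the host entry so the next digest/DH-group combination starts from scratch.
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"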
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.405 { 00:16:16.405 "cntlid": 141, 00:16:16.405 "qid": 0, 00:16:16.405 "state": "enabled", 00:16:16.405 "listen_address": { 00:16:16.405 "trtype": "RDMA", 00:16:16.405 "adrfam": "IPv4", 00:16:16.405 "traddr": "192.168.100.8", 00:16:16.405 "trsvcid": "4420" 00:16:16.405 }, 00:16:16.405 "peer_address": { 00:16:16.405 "trtype": "RDMA", 00:16:16.405 "adrfam": "IPv4", 00:16:16.405 "traddr": "192.168.100.8", 00:16:16.405 "trsvcid": "43878" 00:16:16.405 }, 00:16:16.405 "auth": { 00:16:16.405 "state": "completed", 00:16:16.405 "digest": "sha512", 00:16:16.405 "dhgroup": "ffdhe8192" 00:16:16.405 } 00:16:16.405 } 00:16:16.405 ]' 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.405 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.664 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.664 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.664 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.664 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.664 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.664 20:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTU4MTMyMjlkZTBkYWZjNTMyMWI1NDVkMDhlYjY4YjcyYWIwMDNmN2NiMDk0MmRm0wFoZA==: --dhchap-ctrl-secret DHHC-1:01:YmYzNzA3ZjY2NDhmMjlkOTY3MzE1NGRmNDMzZGYxMWbOAb25: 00:16:17.601 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.601 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:17.601 20:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.601 20:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 20:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.601 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.601 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:16:17.601 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.860 20:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.120 00:16:18.120 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.120 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.120 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.379 { 00:16:18.379 "cntlid": 143, 00:16:18.379 "qid": 0, 00:16:18.379 "state": "enabled", 00:16:18.379 "listen_address": { 00:16:18.379 "trtype": "RDMA", 00:16:18.379 "adrfam": "IPv4", 00:16:18.379 "traddr": "192.168.100.8", 00:16:18.379 "trsvcid": "4420" 00:16:18.379 }, 00:16:18.379 "peer_address": { 00:16:18.379 "trtype": "RDMA", 00:16:18.379 "adrfam": "IPv4", 00:16:18.379 "traddr": "192.168.100.8", 00:16:18.379 "trsvcid": "38664" 00:16:18.379 }, 00:16:18.379 "auth": { 00:16:18.379 "state": 
"completed", 00:16:18.379 "digest": "sha512", 00:16:18.379 "dhgroup": "ffdhe8192" 00:16:18.379 } 00:16:18.379 } 00:16:18.379 ]' 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.379 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.638 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.638 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.638 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.638 20:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:19.575 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.576 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.144 00:16:20.144 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.144 20:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.144 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.404 { 00:16:20.404 "cntlid": 145, 00:16:20.404 "qid": 0, 00:16:20.404 "state": "enabled", 00:16:20.404 "listen_address": { 00:16:20.404 "trtype": "RDMA", 00:16:20.404 "adrfam": "IPv4", 00:16:20.404 "traddr": "192.168.100.8", 00:16:20.404 "trsvcid": "4420" 00:16:20.404 }, 00:16:20.404 "peer_address": { 00:16:20.404 "trtype": "RDMA", 00:16:20.404 "adrfam": "IPv4", 00:16:20.404 "traddr": "192.168.100.8", 00:16:20.404 "trsvcid": "41555" 00:16:20.404 }, 00:16:20.404 "auth": { 00:16:20.404 "state": "completed", 00:16:20.404 "digest": "sha512", 00:16:20.404 "dhgroup": "ffdhe8192" 00:16:20.404 } 00:16:20.404 } 00:16:20.404 ]' 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.404 20:25:33 
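From target/auth.sh@102 onward the host is reopened to the full digest list (sha256,sha384,sha512) and the full DH-group list (null through ffdhe8192), and connect_authenticate runs once more with key0; the qpair fields checked around this point confirm that negotiation still settles on sha512/ffdhe8192 when everything is offered. The reconfiguration step, as traced, before the usual add_host/attach/verify cycle repeats:

# Offer every supported digest and DH group, then re-authenticate with key0/ckey0.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192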
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.404 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.663 20:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWFkZmI0NTkzNzAwZDdkMDY5MjAzNTJmZTI3OTE3MmEwZWVjZjgxMGU0NmI4NTAyYxRRfg==: --dhchap-ctrl-secret DHHC-1:03:YWQ1OTU2YWQ3ZTcwNDFlNTk0NWU3MWU3MTMxMzYyMmIwNWIwZjc3YmFiMjkwNjQ1ZmI4MmY4NzRlZTZlNzIyOGZgvG4=: 00:16:21.230 20:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.230 20:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:21.231 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.231 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.231 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.231 20:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:16:21.231 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.231 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.490 20:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:21.490 20:25:34 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:53.655 request: 00:16:53.655 { 00:16:53.655 "name": "nvme0", 00:16:53.655 "trtype": "rdma", 00:16:53.655 "traddr": "192.168.100.8", 00:16:53.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:16:53.655 "adrfam": "ipv4", 00:16:53.655 "trsvcid": "4420", 00:16:53.655 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.655 "dhchap_key": "key2", 00:16:53.655 "method": "bdev_nvme_attach_controller", 00:16:53.655 "req_id": 1 00:16:53.655 } 00:16:53.655 Got JSON-RPC error response 00:16:53.655 response: 00:16:53.655 { 00:16:53.655 "code": -32602, 00:16:53.655 "message": "Invalid parameters" 00:16:53.655 } 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.655 20:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.655 request: 00:16:53.655 { 00:16:53.655 "name": "nvme0", 00:16:53.655 "trtype": "rdma", 00:16:53.655 "traddr": "192.168.100.8", 00:16:53.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:16:53.655 "adrfam": "ipv4", 00:16:53.655 "trsvcid": "4420", 00:16:53.655 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.655 "dhchap_key": "key1", 00:16:53.655 "dhchap_ctrlr_key": "ckey2", 00:16:53.655 "method": "bdev_nvme_attach_controller", 00:16:53.655 "req_id": 1 00:16:53.655 } 00:16:53.655 Got JSON-RPC error response 00:16:53.655 response: 00:16:53.655 { 00:16:53.655 "code": -32602, 00:16:53.655 "message": "Invalid parameters" 00:16:53.655 } 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.655 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:53.656 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.656 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:53.656 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.656 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:53.656 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:53.656 20:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.656 20:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.737 request: 00:17:25.737 { 00:17:25.737 "name": "nvme0", 00:17:25.737 "trtype": "rdma", 00:17:25.737 "traddr": "192.168.100.8", 00:17:25.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:25.737 "adrfam": "ipv4", 00:17:25.737 "trsvcid": "4420", 00:17:25.737 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:25.737 "dhchap_key": "key1", 00:17:25.737 "dhchap_ctrlr_key": "ckey1", 00:17:25.737 "method": "bdev_nvme_attach_controller", 00:17:25.737 "req_id": 1 00:17:25.737 } 00:17:25.737 Got JSON-RPC error response 00:17:25.737 response: 00:17:25.737 { 00:17:25.737 "code": -32602, 00:17:25.737 "message": "Invalid parameters" 00:17:25.737 } 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3037580 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3037580 ']' 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3037580 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3037580 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:25.737 20:26:35 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3037580' 00:17:25.737 killing process with pid 3037580 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3037580 00:17:25.737 20:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3037580 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3070380 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3070380 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3070380 ']' 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.737 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3070380 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3070380 ']' 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
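By this point the three expected failures have been collected: an attach offering key2 when only key1 is registered for the host, key1 paired with the wrong controller key (ckey2), and key1/ckey1 against a host entry that was added without a controller key, each rejected with JSON-RPC error -32602 "Invalid parameters". The first nvmf_tgt (pid 3037580) is then killed and a second target (pid 3070380) is brought up with --wait-for-rpc -L nvmf_auth so the remaining checks run with DH-HMAC-CHAP debug logging enabled. A sketch of that restart as it appears in the nvmf/common.sh trace (the helper backgrounds nvmf_tgt and records its pid; shown here directly):

# Relaunch the target with auth debug logging before the remaining checks.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock accepts JSON-RPC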
00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:25.738 20:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.738 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.738 20:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:25.738 { 00:17:25.738 "cntlid": 1, 00:17:25.738 "qid": 0, 00:17:25.738 "state": "enabled", 00:17:25.738 "listen_address": { 00:17:25.738 "trtype": "RDMA", 00:17:25.738 "adrfam": "IPv4", 00:17:25.738 "traddr": "192.168.100.8", 00:17:25.738 "trsvcid": "4420" 00:17:25.738 }, 00:17:25.738 "peer_address": { 00:17:25.738 "trtype": "RDMA", 00:17:25.738 "adrfam": "IPv4", 00:17:25.738 "traddr": "192.168.100.8", 00:17:25.738 "trsvcid": "50287" 00:17:25.738 }, 00:17:25.738 "auth": { 00:17:25.738 "state": "completed", 00:17:25.738 "digest": "sha512", 00:17:25.738 "dhgroup": "ffdhe8192" 00:17:25.738 } 00:17:25.738 } 00:17:25.738 ]' 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.738 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTI1MTlmYjQwNDhmYTVhYmVkOGVjYTBlYjZlNWU1NDI2YjhkNjk2NjBhMTdlM2VhYmFlZmJmMWQwZDEwODhlNizRJFA=: 00:17:25.997 20:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:26.256 20:26:39 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.256 20:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.339 request: 00:17:58.340 { 00:17:58.340 "name": "nvme0", 00:17:58.340 "trtype": "rdma", 00:17:58.340 "traddr": "192.168.100.8", 00:17:58.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:58.340 "adrfam": "ipv4", 00:17:58.340 "trsvcid": "4420", 00:17:58.340 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:58.340 "dhchap_key": "key3", 00:17:58.340 "method": "bdev_nvme_attach_controller", 00:17:58.340 "req_id": 1 00:17:58.340 } 00:17:58.340 Got JSON-RPC error response 00:17:58.340 response: 00:17:58.340 { 00:17:58.340 "code": -32602, 00:17:58.340 "message": "Invalid parameters" 00:17:58.340 } 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.340 20:27:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.436 request: 00:18:30.436 { 00:18:30.436 "name": "nvme0", 00:18:30.436 "trtype": "rdma", 00:18:30.436 "traddr": "192.168.100.8", 00:18:30.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:30.436 "adrfam": "ipv4", 00:18:30.436 "trsvcid": "4420", 00:18:30.436 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.436 "dhchap_key": "key3", 00:18:30.436 "method": "bdev_nvme_attach_controller", 00:18:30.436 "req_id": 1 00:18:30.436 } 00:18:30.436 Got JSON-RPC error response 00:18:30.436 response: 00:18:30.436 { 00:18:30.436 "code": -32602, 00:18:30.436 "message": "Invalid parameters" 00:18:30.436 } 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@173 -- # trap - SIGINT SIGTERM EXIT 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@174 -- # cleanup 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3037818 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3037818 ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3037818 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3037818 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3037818' 00:18:30.436 killing process with pid 3037818 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3037818 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3037818 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:30.436 rmmod nvme_rdma 00:18:30.436 rmmod nvme_fabrics 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3070380 ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3070380 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3070380 ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3070380 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3070380 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3070380' 00:18:30.436 killing process with pid 3070380 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3070380 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3070380 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.bbv /tmp/spdk.key-sha256.AUk /tmp/spdk.key-sha384.8Rr /tmp/spdk.key-sha512.M2t /tmp/spdk.key-sha512.dV0 /tmp/spdk.key-sha384.gTU /tmp/spdk.key-sha256.Ulm '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:18:30.436 00:18:30.436 real 4m20.290s 
00:18:30.436 user 9m21.485s 00:18:30.436 sys 0m18.979s 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:30.436 20:27:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.436 ************************************ 00:18:30.436 END TEST nvmf_auth_target 00:18:30.436 ************************************ 00:18:30.436 20:27:40 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:18:30.436 20:27:40 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:30.436 20:27:40 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:30.436 20:27:40 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:18:30.436 20:27:40 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:18:30.436 20:27:40 nvmf_rdma -- nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:18:30.436 20:27:40 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:30.436 20:27:40 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:30.436 20:27:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:30.436 ************************************ 00:18:30.436 START TEST nvmf_device_removal 00:18:30.436 ************************************ 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1121 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:18:30.436 * Looking for test storage... 00:18:30.436 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- 
common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:18:30.436 20:27:40 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:18:30.436 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:18:30.437 
20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:30.437 #define SPDK_CONFIG_H 00:18:30.437 #define SPDK_CONFIG_APPS 1 00:18:30.437 #define SPDK_CONFIG_ARCH native 00:18:30.437 #undef SPDK_CONFIG_ASAN 00:18:30.437 #undef SPDK_CONFIG_AVAHI 00:18:30.437 #undef SPDK_CONFIG_CET 00:18:30.437 #define SPDK_CONFIG_COVERAGE 1 00:18:30.437 #define SPDK_CONFIG_CROSS_PREFIX 00:18:30.437 #undef SPDK_CONFIG_CRYPTO 00:18:30.437 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:30.437 #undef SPDK_CONFIG_CUSTOMOCF 00:18:30.437 #undef 
SPDK_CONFIG_DAOS 00:18:30.437 #define SPDK_CONFIG_DAOS_DIR 00:18:30.437 #define SPDK_CONFIG_DEBUG 1 00:18:30.437 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:30.437 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:18:30.437 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:30.437 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:30.437 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:30.437 #undef SPDK_CONFIG_DPDK_UADK 00:18:30.437 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:18:30.437 #define SPDK_CONFIG_EXAMPLES 1 00:18:30.437 #undef SPDK_CONFIG_FC 00:18:30.437 #define SPDK_CONFIG_FC_PATH 00:18:30.437 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:30.437 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:30.437 #undef SPDK_CONFIG_FUSE 00:18:30.437 #undef SPDK_CONFIG_FUZZER 00:18:30.437 #define SPDK_CONFIG_FUZZER_LIB 00:18:30.437 #undef SPDK_CONFIG_GOLANG 00:18:30.437 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:30.437 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:30.437 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:30.437 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:18:30.437 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:30.437 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:30.437 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:30.437 #define SPDK_CONFIG_IDXD 1 00:18:30.437 #undef SPDK_CONFIG_IDXD_KERNEL 00:18:30.437 #undef SPDK_CONFIG_IPSEC_MB 00:18:30.437 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:30.437 #define SPDK_CONFIG_ISAL 1 00:18:30.437 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:30.437 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:30.437 #define SPDK_CONFIG_LIBDIR 00:18:30.437 #undef SPDK_CONFIG_LTO 00:18:30.437 #define SPDK_CONFIG_MAX_LCORES 00:18:30.437 #define SPDK_CONFIG_NVME_CUSE 1 00:18:30.437 #undef SPDK_CONFIG_OCF 00:18:30.437 #define SPDK_CONFIG_OCF_PATH 00:18:30.437 #define SPDK_CONFIG_OPENSSL_PATH 00:18:30.437 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:30.437 #define SPDK_CONFIG_PGO_DIR 00:18:30.437 #undef SPDK_CONFIG_PGO_USE 00:18:30.437 #define SPDK_CONFIG_PREFIX /usr/local 00:18:30.437 #undef SPDK_CONFIG_RAID5F 00:18:30.437 #undef SPDK_CONFIG_RBD 00:18:30.437 #define SPDK_CONFIG_RDMA 1 00:18:30.437 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:30.437 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:30.437 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:30.437 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:30.437 #define SPDK_CONFIG_SHARED 1 00:18:30.437 #undef SPDK_CONFIG_SMA 00:18:30.437 #define SPDK_CONFIG_TESTS 1 00:18:30.437 #undef SPDK_CONFIG_TSAN 00:18:30.437 #define SPDK_CONFIG_UBLK 1 00:18:30.437 #define SPDK_CONFIG_UBSAN 1 00:18:30.437 #undef SPDK_CONFIG_UNIT_TESTS 00:18:30.437 #undef SPDK_CONFIG_URING 00:18:30.437 #define SPDK_CONFIG_URING_PATH 00:18:30.437 #undef SPDK_CONFIG_URING_ZNS 00:18:30.437 #undef SPDK_CONFIG_USDT 00:18:30.437 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:30.437 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:30.437 #undef SPDK_CONFIG_VFIO_USER 00:18:30.437 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:30.437 #define SPDK_CONFIG_VHOST 1 00:18:30.437 #define SPDK_CONFIG_VIRTIO 1 00:18:30.437 #undef SPDK_CONFIG_VTUNE 00:18:30.437 #define SPDK_CONFIG_VTUNE_DIR 00:18:30.437 #define SPDK_CONFIG_WERROR 1 00:18:30.437 #define SPDK_CONFIG_WPDK_DIR 00:18:30.437 #undef SPDK_CONFIG_XNVME 00:18:30.437 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@57 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@61 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # : 1 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # : 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@79 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # : 1 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # : 0 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # : 1 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:18:30.437 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@93 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # : rdma 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@113 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # : 1 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # : 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal 
-- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@127 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # : 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # : true 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@147 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # : 0 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # : mlx5 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:18:30.438 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # : 0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # : 0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # : 0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@161 -- # : 0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # : 0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@166 -- # : 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # : 0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # : 0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # cat 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:18:30.439 20:27:40 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # export valgrind= 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # valgrind= 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@268 -- # uname -s 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:18:30.439 20:27:40 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@278 -- # MAKE=make 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j96 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # TEST_MODE= 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # for i in "$@" 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # case "$i" in 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@317 -- # [[ -z 3081479 ]] 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@317 -- # kill -0 3081479 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local mount target_dir 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.zJAfCO 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zJAfCO/tests/target /tmp/spdk.zJAfCO 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@326 -- # df -T 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 
-- # mounts["$mount"]=spdk_devtmpfs 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=1052192768 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=4232237056 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=183322816512 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=195974324224 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=12651507712 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=97924775936 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987162112 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=62386176 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=39171633152 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=39194865664 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=23232512 00:18:30.439 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=tmpfs 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=97984802816 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987162112 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=2359296 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=19597426688 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=19597430784 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:18:30.440 * Looking for test storage... 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@367 -- # local target_space new_size 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # mount=/ 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@373 -- # target_space=183322816512 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # new_size=14866100224 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:30.440 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # return 0 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1678 -- # set -o errtrace 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # true 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1685 -- # xtrace_fd 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
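
The set_test_storage trace above reduces to a df-based free-space check: parse df -T output into per-mount arrays, find the mount backing the test directory, and fall back to a mktemp directory when the requested size does not fit. A minimal standalone sketch of that check; TESTDIR and the 2 GiB requirement are placeholders taken from the trace, not the suite's actual helper:

  # Sketch only: free-space check in the spirit of set_test_storage above.
  # TESTDIR and the 2 GiB figure are placeholders copied from the trace.
  TESTDIR=${TESTDIR:-/var/tmp/mytest}
  requested_size=$((2 * 1024 * 1024 * 1024))

  declare -A avails
  while read -r source fs size used avail mount; do
      avails["$mount"]=$((avail * 1024))           # df -P reports 1K blocks
  done < <(df -T -P | awk 'NR > 1 {print $1, $2, $3, $4, $5, $7}')

  mkdir -p "$TESTDIR"
  mount_point=$(df -P "$TESTDIR" | awk 'NR > 1 {print $6}')
  if [ "${avails[$mount_point]:-0}" -ge "$requested_size" ]; then
      echo "enough space for tests under $TESTDIR (mounted on $mount_point)"
  else
      storage_fallback=$(mktemp -dt spdk.XXXXXX)   # same fallback pattern as the trace
      echo "not enough space on $mount_point, falling back to $storage_fallback"
  fi
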
00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:30.440 20:27:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:18:30.440 20:27:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:34.634 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:34.634 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.634 20:27:46 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:34.635 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:34.635 Found net devices under 0000:da:00.0: mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:34.635 Found net devices under 0000:da:00.1: mlx_0_1 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:18:34.635 20:27:46 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- 
nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:34.635 258: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.635 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:18:34.635 altname enp218s0f0np0 00:18:34.635 altname ens818f0np0 00:18:34.635 inet 192.168.100.8/24 scope global mlx_0_0 00:18:34.635 valid_lft forever preferred_lft forever 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:34.635 259: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.635 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:18:34.635 altname enp218s0f1np1 00:18:34.635 altname ens818f1np1 00:18:34.635 inet 192.168.100.9/24 scope global mlx_0_1 00:18:34.635 valid_lft forever preferred_lft forever 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.635 20:27:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:34.635 192.168.100.9' 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:34.635 192.168.100.9' 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:18:34.635 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:34.636 192.168.100.9' 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:18:34.636 20:27:47 
nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:18:34.636 ************************************ 00:18:34.636 START TEST nvmf_device_removal_pci_remove_no_srq 00:18:34.636 ************************************ 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1121 -- # test_remove_and_rescan --no-srq 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=3085016 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # waitforlisten 3085016 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@827 -- # '[' -z 3085016 ']' 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:34.636 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:34.636 [2024-05-16 20:27:47.148287] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
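
The get_ip_address and get_available_rdma_ips steps earlier in the trace boil down to reading each mlx interface's first IPv4 address and splitting the result into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A condensed sketch of that discovery; the interface names mlx_0_0 and mlx_0_1 are the ones from this run and would differ on other hosts:

  # Sketch: IPv4 discovery for the RDMA interfaces, same pipeline as
  # nvmf/common.sh@113 in the trace. Interface names are from this run.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  rdma_ip_list=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)
  echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"
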
00:18:34.636 [2024-05-16 20:27:47.148327] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.636 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.636 [2024-05-16 20:27:47.204316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:34.636 [2024-05-16 20:27:47.282684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.636 [2024-05-16 20:27:47.282720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.636 [2024-05-16 20:27:47.282727] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.636 [2024-05-16 20:27:47.282734] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.636 [2024-05-16 20:27:47.282738] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.636 [2024-05-16 20:27:47.282779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.636 [2024-05-16 20:27:47.282781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.206 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@860 -- # return 0 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.207 20:27:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.207 [2024-05-16 20:27:48.025510] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14952f0/0x14997e0) succeed. 00:18:35.207 [2024-05-16 20:27:48.034337] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14967f0/0x14dae70) succeed. 
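
Stripped of the test wrappers, the target-side bring-up traced here is the stock SPDK flow: start nvmf_tgt, wait for its RPC socket, then create an RDMA transport with shared receive queues disabled. A hedged sketch using scripts/rpc.py from an SPDK checkout; the core mask, event mask and buffer values are the ones visible in the trace:

  # Sketch: target bring-up equivalent to nvmfappstart + nvmf_create_transport above.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  # poll the default RPC socket instead of the suite's waitforlisten helper
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  # RDMA transport without SRQ, flags copied from device_removal.sh@48 in the trace
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq
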
00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:18:35.207 
20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.207 [2024-05-16 20:27:48.163724] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:35.207 [2024-05-16 20:27:48.164085] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.207 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:35.467 [2024-05-16 20:27:48.251442] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # 
generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=3085134 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 3085134 /var/tmp/bdevperf.sock 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@827 -- # '[' -z 3085134 ']' 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
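
The two create_subsystem_and_connect_on_netdev passes above issue the same four RPCs per mlx interface: a 128 MiB malloc bdev, a subsystem, a namespace and an RDMA listener on port 4420. Condensed into a loop; the NQNs, serial numbers and addresses are the ones from this run:

  # Sketch: per-interface subsystem wiring, mirroring target/device_removal.sh@36-39 above.
  declare -A ips=( [mlx_0_0]=192.168.100.8 [mlx_0_1]=192.168.100.9 )

  for dev in mlx_0_0 mlx_0_1; do
      nqn=nqn.2016-06.io.spdk:system_${dev}
      ./scripts/rpc.py bdev_malloc_create 128 512 -b "$dev"              # 128 MiB, 512 B blocks
      ./scripts/rpc.py nvmf_create_subsystem "$nqn" -a -s "SPDK000${dev}"
      ./scripts/rpc.py nvmf_subsystem_add_ns "$nqn" "$dev"
      ./scripts/rpc.py nvmf_subsystem_add_listener "$nqn" -t rdma -a "${ips[$dev]}" -s 4420
  done
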
00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:35.467 20:27:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@860 -- # return 0 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:36.405 Nvme_mlx_0_0n1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 
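
On the initiator side the trace starts bdevperf in RPC-wait mode on core mask 0x4 and then attaches each subsystem through bdevperf's dedicated RPC socket before perform_tests kicks off the verify workload. A sketch of that sequence; the socket path, bdev names and reconnect flags are copied verbatim from the trace rather than derived independently:

  # Sketch: initiator-side bdevperf wiring as traced above.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done

  rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc bdev_nvme_set_options -r -1                      # -r -1 copied from device_removal.sh@98

  $rpc bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
  $rpc bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1

  # start the I/O run; the suite does this via bdevperf.py perform_tests
  ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &
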
00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:36.405 Nvme_mlx_0_1n1 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3085320 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5 00:18:36.405 20:27:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.680 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:18:41.680 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:18:41.680 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:18:41.680 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:18:41.680 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:18:41.680 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:41.680 20:27:54 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/infiniband 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.681 mlx5_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:18:41.681 20:27:54 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:41.681 20:27:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:18:41.681 [2024-05-16 20:27:54.474464] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:18:41.681 [2024-05-16 20:27:54.474557] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:41.681 [2024-05-16 20:27:54.474648] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:41.681 [2024-05-16 20:27:54.474660] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 66 00:18:41.681 [2024-05-16 20:27:54.474666] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:18:41.681 [2024-05-16 20:27:54.474672] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474678] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474683] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474688] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474693] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474698] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474703] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474708] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474713] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474718] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474722] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474727] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474732] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474737] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474742] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474751] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474756] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474761] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474765] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 
20:27:54.474770] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474776] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474781] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474785] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474790] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474795] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474800] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474805] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474809] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474814] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474828] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474833] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474837] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474842] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474847] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474852] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474857] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474862] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474867] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474872] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474877] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474882] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474889] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474894] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474899] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474904] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474909] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474914] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474918] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474924] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474929] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474933] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474938] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: 
Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474943] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474947] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474952] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474957] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474961] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474967] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.681 [2024-05-16 20:27:54.474972] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.681 [2024-05-16 20:27:54.474977] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.474982] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.474988] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.474993] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.474998] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475003] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475008] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475013] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475017] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475022] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475027] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475032] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475036] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475041] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475046] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475051] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475056] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475060] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475065] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475070] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475075] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475079] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475084] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475088] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475093] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475098] 
rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475102] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475107] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475112] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475116] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475121] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475126] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475131] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475136] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475143] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475148] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475153] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475157] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475162] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475167] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475173] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475178] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475182] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475187] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475193] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475197] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475202] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475207] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475211] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475217] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475221] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475226] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475231] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475236] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475240] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475245] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475250] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475255] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From 
Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475259] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475264] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475268] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475273] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475278] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475282] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475287] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475292] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475298] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475303] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475308] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475313] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475318] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:41.682 [2024-05-16 20:27:54.475322] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:41.682 [2024-05-16 20:27:54.475327] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 
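
What just happened above: the test resolved mlx_0_0 to its PCI function via sysfs and triggered a hot-remove (the echo 1 in remove_one_nic, presumably aimed at that function's sysfs remove attribute, which xtrace does not show), and the target reacted by tearing down the 192.168.100.8:4420 listener and dumping the outstanding requests of the affected queue pair. The loop starting here then polls the target until mlx5_0 drops out of the transport's device list. A sketch of that remove-and-confirm sequence, assuming the standard sysfs remove attribute and the default /var/tmp/spdk.sock RPC socket; nvmf_get_stats and the jq path are the ones used in the trace.

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# Map the netdev to its PCI function (BDF taken from the trace) and the RDMA device it backs.
pci_dir=$(readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device)
rdma_dev_name=$(ls "${pci_dir}/infiniband")            # resolves to mlx5_0 above

# Hot-remove the PCI function; this is what triggers the removal path logged above.
echo 1 > "${pci_dir}/remove"

# Wait until the nvmf target no longer reports that RDMA device.
for _ in $(seq 1 10); do
    if ! "$SPDK_DIR"/scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices[].name' \
            | grep -q "$rdma_dev_name"; then
        break
    fi
    sleep 1
done
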
00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:18:48.321 20:28:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:18:48.321 [2024-05-16 20:28:01.092614] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x1495da0, err 11. Skip rescan. 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/net 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:18:48.321 20:28:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:18:48.580 [2024-05-16 20:28:01.451004] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1836ee0/0x14997e0) succeed. 00:18:48.580 [2024-05-16 20:28:01.451054] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
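
The recovery half of the cycle shown above: the test asks the PCI bus to rescan, waits for a net device to reappear under the slot's net/ directory, and brings the link back up, after which the target re-creates the IB device (mlx5_0) but keeps the 4420 listener in a failed state until the address returns. A sketch of that rescan-and-relink step; /sys/bus/pci/rescan is assumed to be the target of the echo 1 inside rescan_pci, since the xtrace output does not show the redirection.

pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0   # from the trace
echo 1 > /sys/bus/pci/rescan                                # assumed rescan knob

# Wait for a netdev to reappear under the recovered PCI function.
new_net_dev=
for _ in $(seq 1 10); do
    new_net_dev=$(ls "${pci_dir}/net" 2>/dev/null | head -n1)
    [[ -n "$new_net_dev" ]] && break
    sleep 1
done

[[ -n "$new_net_dev" ]] && ip link set "$new_net_dev" up
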
00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:51.869 [2024-05-16 20:28:04.486473] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:51.869 [2024-05-16 20:28:04.486513] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:18:51.869 [2024-05-16 20:28:04.486530] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:51.869 [2024-05-16 20:28:04.486541] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/infiniband 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.869 20:28:04 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.869 mlx5_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:51.869 20:28:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:18:51.869 [2024-05-16 20:28:04.639470] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:18:51.869 [2024-05-16 20:28:04.639530] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:51.869 [2024-05-16 20:28:04.649242] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:51.869 [2024-05-16 20:28:04.649259] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 97 00:18:51.869 [2024-05-16 20:28:04.649265] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:18:51.869 [2024-05-16 20:28:04.649272] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.869 [2024-05-16 20:28:04.649280] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.869 [2024-05-16 20:28:04.649286] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.869 [2024-05-16 20:28:04.649291] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.869 [2024-05-16 20:28:04.649297] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.869 [2024-05-16 20:28:04.649302] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.869 [2024-05-16 20:28:04.649307] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.869 [2024-05-16 20:28:04.649312] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.869 [2024-05-16 20:28:04.649317] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.869 [2024-05-16 20:28:04.649323] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.869 [2024-05-16 20:28:04.649328] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.869 [2024-05-16 20:28:04.649333] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649339] rdma.c: 632:nvmf_rdma_dump_request: 
*ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649344] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649349] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649354] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649359] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649364] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649369] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649374] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649380] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649384] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649389] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649395] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649400] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649405] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649410] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649416] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649425] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649431] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649436] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649442] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649448] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649453] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649458] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649464] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649469] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649475] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649480] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649485] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649490] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649496] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649501] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649508] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649513] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 
20:28:04.649519] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649524] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649529] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649536] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649541] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649546] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649551] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649557] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649563] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649568] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649573] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649578] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649583] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649588] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649594] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649599] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649603] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649609] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649614] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649620] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649626] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649631] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649636] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649642] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649647] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649652] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649657] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649662] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649667] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649673] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649678] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649683] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649689] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: 
Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649694] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649699] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649704] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649709] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649714] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649719] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649726] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649731] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649737] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649742] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649747] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649752] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649758] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649763] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649768] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649774] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649779] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649785] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649791] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649796] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649801] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649806] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649811] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649816] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649821] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649826] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649832] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649837] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649843] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649848] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649853] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649858] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649863] 
rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649868] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649873] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649878] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649884] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649889] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649894] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649899] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649904] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649909] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649914] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649919] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649924] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649929] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649934] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649940] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.870 [2024-05-16 20:28:04.649946] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649951] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649957] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649962] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649967] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.870 [2024-05-16 20:28:04.649972] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.870 [2024-05-16 20:28:04.649978] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.649984] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.649989] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.649997] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650003] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650011] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650016] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650023] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650030] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650035] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650040] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From 
Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650045] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650051] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650056] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650061] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650066] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650071] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650076] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650081] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650087] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650092] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650097] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650102] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650107] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650112] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650117] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650122] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650127] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650133] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650138] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650144] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650148] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650154] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650159] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650164] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650169] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650176] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650181] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650187] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650192] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650197] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650202] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650207] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650212] rdma.c: 
634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650217] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650222] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650227] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650232] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650237] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650242] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650248] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650253] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650258] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650263] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650268] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650273] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650278] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650284] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.871 [2024-05-16 20:28:04.650289] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650295] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.871 [2024-05-16 20:28:04.650300] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.871 [2024-05-16 20:28:04.650307] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.438 20:28:10 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:18:58.438 20:28:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:18:59.375 [2024-05-16 20:28:12.211276] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x14982d0, err 11. Skip rescan. 
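The xtrace above exercises two small helpers from test/nvmf/target/device_removal.sh: one that checks whether a named RDMA device is still reported by the target, and one that counts the devices in the first poll group. Reconstructed from the trace alone (the exact bodies in the script may differ), they amount to roughly:

    # Succeeds (exit 0) only while the named device is still listed by the target.
    check_rdma_dev_exists_in_nvmf_tgt() {
        local rdma_dev_name=$1
        # rpc_cmd is the autotest wrapper that issues RPCs against the running target.
        rpc_cmd nvmf_get_stats |
            jq -r '.poll_groups[0].transports[].devices[].name' |
            grep "$rdma_dev_name"
    }

    # Prints how many RDMA devices the first poll group currently exposes.
    get_rdma_dev_count_in_nvmf_tgt() {
        rpc_cmd nvmf_get_stats |
            jq -r '.poll_groups[0].transports[].devices | length'
    }

The surrounding for i in $(seq 1 10) loops simply poll these helpers until the removed device disappears (ib_count_after_remove=1 above) and, later, until a new device shows up again.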
00:18:59.375 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:18:59.375 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:18:59.375 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/net 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.376 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:59.635 
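After the skipped rescan, the trace above re-discovers the netdev that reappeared under the original PCI function and restores its test address. In outline (PCI path and IP taken from this run; a sketch rather than the verbatim script):

    # The device function that was removed and re-added in this run.
    pci_net_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/net
    new_net_dev=$(ls "$pci_net_dir")                  # -> mlx_0_1 here
    [[ -n $new_net_dev ]] || exit 1

    ip link set "$new_net_dev" up
    # Only re-add 192.168.100.9/24 if the interface came back without an address.
    current_ip=$(ip -o -4 addr show "$new_net_dev" | awk '{print $4}' | cut -d/ -f1)
    [[ -n $current_ip ]] || ip addr add 192.168.100.9/24 dev "$new_net_dev"

Once the address is back, the test polls nvmf_get_stats again until the device count rises above ib_count_after_remove.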
[2024-05-16 20:28:12.535426] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16a0010/0x14dae70) succeed. 00:18:59.635 [2024-05-16 20:28:12.539434] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:18:59.635 [2024-05-16 20:28:12.539456] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:18:59.635 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.635 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:18:59.635 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:18:59.635 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:18:59.635 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf 00:18:59.635 20:28:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 3085320 00:20:07.335 0 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 3085134 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@946 -- # '[' -z 3085134 ']' 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@950 -- # kill -0 3085134 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # uname 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3085134 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3085134' 00:20:07.335 killing process with pid 3085134 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@965 -- # kill 3085134 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@970 -- # wait 3085134 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@119 -- # bdevperf_pid= 00:20:07.335 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:20:07.335 [2024-05-16 20:27:48.304445] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
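Before the captured bdevperf log (try.txt) continues below, note that the stop_bdevperf / killprocess sequence traced just above reduces to the following shape (reconstructed from the trace; the real helper in autotest_common.sh handles more cases, such as sudo-wrapped processes and non-Linux hosts):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # reactor_* (SPDK app) processes are killed directly; a sudo wrapper would need extra care.
            if [[ $process_name != sudo ]]; then
                echo "killing process with pid $pid"
                kill "$pid"
            fi
        fi
        wait "$pid" 2>/dev/null || true
    }

In this run the target of killprocess is the bdevperf reactor (process_name=reactor_2, pid 3085134), which is a child of the test shell, so the final wait collects its exit status.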
00:20:07.335 [2024-05-16 20:27:48.304487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085134 ] 00:20:07.335 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.335 [2024-05-16 20:27:48.358290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.335 [2024-05-16 20:27:48.431392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.335 Running I/O for 90 seconds... 00:20:07.335 [2024-05-16 20:27:54.474568] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:07.335 [2024-05-16 20:27:54.474595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.335 [2024-05-16 20:27:54.474604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:20:07.335 [2024-05-16 20:27:54.474613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.335 [2024-05-16 20:27:54.474619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:20:07.335 [2024-05-16 20:27:54.474626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.335 [2024-05-16 20:27:54.474632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:20:07.335 [2024-05-16 20:27:54.474639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.335 [2024-05-16 20:27:54.474645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:20:07.335 [2024-05-16 20:27:54.475798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.335 [2024-05-16 20:27:54.475809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:07.335 [2024-05-16 20:27:54.475829] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:07.335 [2024-05-16 20:27:54.484571] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.335 [2024-05-16 20:27:54.494596] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.335 [2024-05-16 20:27:54.504630] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.335 [2024-05-16 20:27:54.514655] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.335 [2024-05-16 20:27:54.525092] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.335 [2024-05-16 20:27:54.535117] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.336 [2024-05-16 20:27:54.545308] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.555664] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.566057] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.576389] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.586757] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.597134] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.607468] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.617821] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.628125] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.638407] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.648719] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.659013] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.669351] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.679762] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.690119] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.700467] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.710496] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.720523] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.730548] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.740574] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.750600] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.760627] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.770656] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.336 [2024-05-16 20:27:54.780681] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.790707] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.800734] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.810759] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.820788] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.830817] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.840857] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.851177] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.861633] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.872060] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.882454] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.892912] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.903218] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.913357] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.923628] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.933832] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.944029] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.954739] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.964767] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.974796] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.984824] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:54.994848] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.004876] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.336 [2024-05-16 20:27:55.015024] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.025302] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.035327] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.045353] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.055526] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.065591] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.076262] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.086457] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.097802] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.108065] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.118375] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.128717] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.139332] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.149692] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.160797] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.171129] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.181931] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.192591] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.203045] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.213476] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.223988] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.234246] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.244505] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.336 [2024-05-16 20:27:55.254523] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.264854] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.274881] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.285020] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.295372] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.305764] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.316370] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.326621] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.336919] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.347525] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.357810] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.367983] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.378167] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.388456] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.398754] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.409554] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.419582] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.429858] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.440291] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.450778] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.461079] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.336 [2024-05-16 20:27:55.471725] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
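The roughly one second of identical bdev_nvme notices above (20:27:54.48 through 20:27:55.47) is easier to read in summary form. When triaging a run like this, a throwaway pair of commands over the captured try.txt is usually enough (illustrative only, not part of the test; assumes GNU grep and sed):

    # How many failover retries were logged, and over what time window?
    grep -o 'Unable to perform failover, already in progress' try.txt | wc -l
    grep -o '\[2024-05-16 [0-9:.]*\] bdev_nvme\.c:2890' try.txt | sed -n '1p;$p'

The records that follow are the in-flight READ/WRITE commands being completed with ABORTED - SQ DELETION while the controller sits in the failed state.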
00:20:07.337 [2024-05-16 20:27:55.478296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:196080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:196088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:196096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:196104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:196112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:196120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:196128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:196136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:196144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 
[2024-05-16 20:27:55.478455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:196152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:196160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:196168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:196176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:196184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:196192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:196200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:196208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:196216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 
20:27:55.478587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:196224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:196232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:196240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:196248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:196256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:196264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:196272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:196280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:196288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478719] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:196296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:196304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:196312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:196320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:196328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.337 [2024-05-16 20:27:55.478792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:196336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x180d00 00:20:07.337 [2024-05-16 20:27:55.478798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:196344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:196352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:196360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:196368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:196376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:196384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:196392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:196400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:196408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:196416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:196424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:196432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:196440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.478989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.478997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:196448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:196456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:196464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:196472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:196480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:196488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:196496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:196504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:196512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:196520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:196528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:196536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:196544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:196552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:196560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:196568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:196576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:196584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x180d00 00:20:07.338 [2024-05-16 20:27:55.479252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.338 [2024-05-16 20:27:55.479260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:196592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x180d00 00:20:07.339 [2024-05-16 20:27:55.479266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:196600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x180d00 00:20:07.339 [2024-05-16 20:27:55.479281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:196608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:196616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:196624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:196632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:196640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:196648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:196656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:196664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:196672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:196680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:196688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:196696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:196704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:196712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:196720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:196728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:196736 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:196744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:196752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:196760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:196768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:196776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:196784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:196792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:196800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:196808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:115 nsid:1 lba:196816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:196824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:196832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:196840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:196848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:196856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:196864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.339 [2024-05-16 20:27:55.479757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.339 [2024-05-16 20:27:55.479765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:196872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:196880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:196888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 
20:27:55.479808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:196896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:196904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:196912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:196920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:196928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:196936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:196944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:196952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:196960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:196968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:196976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:196984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:196992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.479993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:197000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.479999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:197008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:197016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:197024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:197032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:197040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:197048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:197056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:197064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:197072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:197080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.340 [2024-05-16 20:27:55.480141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.340 [2024-05-16 20:27:55.480149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:197088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.341 [2024-05-16 20:27:55.480156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.341 [2024-05-16 20:27:55.492909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.341 [2024-05-16 20:27:55.492923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.341 [2024-05-16 20:27:55.492929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:197096 len:8 PRP1 0x0 PRP2 0x0 00:20:07.341 [2024-05-16 20:27:55.492936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.341 [2024-05-16 20:27:55.495569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:07.341 [2024-05-16 20:27:55.495849] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:07.341 [2024-05-16 20:27:55.495862] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.341 [2024-05-16 20:27:55.495871] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:07.341 [2024-05-16 20:27:55.495887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.341 [2024-05-16 20:27:55.495894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:20:07.341 [2024-05-16 20:27:55.495907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:07.341 [2024-05-16 20:27:55.495913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:07.341 [2024-05-16 20:27:55.495920] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:07.341 [2024-05-16 20:27:55.495938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.341 [2024-05-16 20:27:55.495944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:07.341 [2024-05-16 20:27:56.500843] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:07.341 [2024-05-16 20:27:56.500876] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.341 [2024-05-16 20:27:56.500883] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:07.341 [2024-05-16 20:27:56.500899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.341 [2024-05-16 20:27:56.500906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:07.341 [2024-05-16 20:27:56.500922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:07.341 [2024-05-16 20:27:56.500929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:07.341 [2024-05-16 20:27:56.500936] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:07.341 [2024-05-16 20:27:56.500956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.341 [2024-05-16 20:27:56.500964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:07.341 [2024-05-16 20:27:57.503774] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:07.341 [2024-05-16 20:27:57.503805] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.341 [2024-05-16 20:27:57.503812] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:07.341 [2024-05-16 20:27:57.503829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.341 [2024-05-16 20:27:57.503836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:20:07.341 [2024-05-16 20:27:57.503846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:07.341 [2024-05-16 20:27:57.503852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:07.341 [2024-05-16 20:27:57.503860] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:07.341 [2024-05-16 20:27:57.503879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.341 [2024-05-16 20:27:57.503886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:07.341 [2024-05-16 20:27:59.508846] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.341 [2024-05-16 20:27:59.508885] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:07.341 [2024-05-16 20:27:59.508906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.341 [2024-05-16 20:27:59.508914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:07.341 [2024-05-16 20:27:59.508924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:07.341 [2024-05-16 20:27:59.508931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:07.341 [2024-05-16 20:27:59.508938] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:07.341 [2024-05-16 20:27:59.508958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.341 [2024-05-16 20:27:59.508966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:07.341 [2024-05-16 20:28:01.514413] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.341 [2024-05-16 20:28:01.514442] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:07.341 [2024-05-16 20:28:01.514463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.341 [2024-05-16 20:28:01.514471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:07.341 [2024-05-16 20:28:01.514481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:07.341 [2024-05-16 20:28:01.514488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:07.341 [2024-05-16 20:28:01.514495] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:07.341 [2024-05-16 20:28:01.514515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.341 [2024-05-16 20:28:01.514523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:07.341 [2024-05-16 20:28:03.519475] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.341 [2024-05-16 20:28:03.519499] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:07.341 [2024-05-16 20:28:03.519517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.341 [2024-05-16 20:28:03.519525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:07.341 [2024-05-16 20:28:03.519536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:07.341 [2024-05-16 20:28:03.519542] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:07.341 [2024-05-16 20:28:03.519549] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:07.341 [2024-05-16 20:28:03.519568] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.341 [2024-05-16 20:28:03.519575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:07.341 [2024-05-16 20:28:04.583996] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:07.341 [2024-05-16 20:28:04.644818] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:07.341 [2024-05-16 20:28:04.644839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.341 [2024-05-16 20:28:04.644848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:20:07.341 [2024-05-16 20:28:04.644859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.341 [2024-05-16 20:28:04.644866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:20:07.341 [2024-05-16 20:28:04.644873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.341 [2024-05-16 20:28:04.644879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:20:07.341 [2024-05-16 20:28:04.644886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.341 [2024-05-16 20:28:04.644892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:20:07.341 [2024-05-16 20:28:04.646682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.341 [2024-05-16 20:28:04.646693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:20:07.341 [2024-05-16 20:28:04.646716] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:07.341 [2024-05-16 20:28:04.654830] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.341 [2024-05-16 20:28:04.664855] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.341 [2024-05-16 20:28:04.674883] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.341 [2024-05-16 20:28:04.684908] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.341 [2024-05-16 20:28:04.694934] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.341 [2024-05-16 20:28:04.704961] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.341 [2024-05-16 20:28:04.714986] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.725011] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.735037] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.745064] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.755090] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.765115] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.775140] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.785166] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.795192] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.805218] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.815244] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.825271] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.835297] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.845325] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.855350] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.865375] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.342 [2024-05-16 20:28:04.875402] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.885429] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.895457] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.905482] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.915513] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.925540] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.935570] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.945593] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.955619] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.965647] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.975672] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.985700] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:04.995725] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.005751] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.015778] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.025805] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.035832] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.045857] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.055883] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.065909] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.075937] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.085963] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.095991] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.342 [2024-05-16 20:28:05.106018] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.116046] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.126071] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.136097] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.146123] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.156151] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.166178] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.176204] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.186232] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.196258] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.206283] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.216310] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.226337] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.236365] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.246391] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.256417] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.266443] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.276468] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.286496] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.296520] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.306547] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.316574] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.326601] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.342 [2024-05-16 20:28:05.336628] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.346655] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.356680] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.366708] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.376734] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.386760] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.396788] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.406813] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.416840] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.426867] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.436892] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.446917] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.456945] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.466971] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.476995] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.487022] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.497047] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.507074] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.517101] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.342 [2024-05-16 20:28:05.527629] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.538325] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.548351] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.558377] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.343 [2024-05-16 20:28:05.569009] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.579709] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.589736] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.600049] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.610993] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.621607] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.631632] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.641658] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.343 [2024-05-16 20:28:05.649321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 
00:20:07.343 [2024-05-16 20:28:05.649437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 
20:28:05.649569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bef00 00:20:07.343 [2024-05-16 20:28:05.649736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.343 [2024-05-16 20:28:05.649744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.343 [2024-05-16 20:28:05.649750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649834] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 
20:28:05.649979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.649987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.649993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16592 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.344 [2024-05-16 20:28:05.650239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.344 [2024-05-16 20:28:05.650247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650254] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 
20:28:05.650394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 
cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.345 [2024-05-16 20:28:05.650612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.345 [2024-05-16 20:28:05.650618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650674] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 
20:28:05.650815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17064 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.650990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.650996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.651004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.651010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.651017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.651024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.651031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.651037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.651045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.651053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.651061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.651067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.651074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.651081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.651089] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.346 [2024-05-16 20:28:05.651095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.346 [2024-05-16 20:28:05.651103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.347 [2024-05-16 20:28:05.651110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.347 [2024-05-16 20:28:05.651118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.347 [2024-05-16 20:28:05.651125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.347 [2024-05-16 20:28:05.651133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.347 [2024-05-16 20:28:05.651140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.347 [2024-05-16 20:28:05.651148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.347 [2024-05-16 20:28:05.651154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32594 cdw0:50889ae0 sqhd:f530 p:0 m:0 dnr:0 00:20:07.347 [2024-05-16 20:28:05.663891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.347 [2024-05-16 20:28:05.663904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.347 [2024-05-16 20:28:05.663910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:8 PRP1 0x0 PRP2 0x0 00:20:07.347 [2024-05-16 20:28:05.663917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.347 [2024-05-16 20:28:05.663957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:07.347 [2024-05-16 20:28:05.665944] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:07.347 [2024-05-16 20:28:05.665961] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.347 [2024-05-16 20:28:05.665966] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:07.347 [2024-05-16 20:28:05.665980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.347 [2024-05-16 20:28:05.665987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
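The status pair "(00/08)" in the completions above decodes, per the NVMe base specification, to status code type 0x0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion": every READ/WRITE still queued on submission queue 1 is failed back once the queue goes away with the removed device, which is why the dump switches to this status wholesale. The removal step itself is not visible in this excerpt; a PCI-remove variant of this test is normally driven through sysfs, roughly as sketched below (the BDF is a placeholder, not taken from this log):

# Hedged sketch only: 0000:af:00.0 is a hypothetical BDF standing in for the mlx5 port under test.
bdf=0000:af:00.0
echo 1 | sudo tee /sys/bus/pci/devices/$bdf/remove   # surprise-remove the function; queued I/O completes with ABORTED - SQ DELETION
sleep 5                                              # leave the port missing long enough for the host to notice and start reconnecting
echo 1 | sudo tee /sys/bus/pci/rescan                # re-enumerate the bus so the port (and its netdev/RDMA device) can come back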
00:20:07.347 [2024-05-16 20:28:05.666011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:07.347 [2024-05-16 20:28:05.666017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:07.347 [2024-05-16 20:28:05.666025] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:07.347 [2024-05-16 20:28:05.666070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.347 [2024-05-16 20:28:05.666078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:07.347 [2024-05-16 20:28:06.670720] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:07.347 [2024-05-16 20:28:06.670757] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.347 [2024-05-16 20:28:06.670763] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:07.347 [2024-05-16 20:28:06.670780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.347 [2024-05-16 20:28:06.670789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:07.347 [2024-05-16 20:28:06.670815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:07.347 [2024-05-16 20:28:06.670827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:07.347 [2024-05-16 20:28:06.670834] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:07.347 [2024-05-16 20:28:06.670854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.347 [2024-05-16 20:28:06.670862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:07.347 [2024-05-16 20:28:07.673951] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:07.347 [2024-05-16 20:28:07.673986] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.347 [2024-05-16 20:28:07.673993] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:07.347 [2024-05-16 20:28:07.674009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.347 [2024-05-16 20:28:07.674017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
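Each reconnect attempt fails at the very first step, address resolution: the RDMA CM hands back RDMA_CM_EVENT_ADDR_ERROR with status -19, which matches -ENODEV and is what you would expect while the mlx5 function behind this connection is absent. If reproducing this by hand, the state of the port is easy to confirm from the shell while the device is gone (assumed tooling: iproute2's rdma utility and rdma-core's ibv_devinfo):

rdma link show                 # the removed port's link (e.g. mlx5_1/1) is missing from the list while the device is gone
ip -o -4 addr show mlx_0_1     # the netdev and its 192.168.100.x address disappear along with the PCI function
ibv_devinfo -d mlx5_1          # reports "No such device" until the PCI rescan brings the port back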
00:20:07.347 [2024-05-16 20:28:07.674036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:07.347 [2024-05-16 20:28:07.674043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:07.347 [2024-05-16 20:28:07.674050] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:07.347 [2024-05-16 20:28:07.674071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.347 [2024-05-16 20:28:07.674079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:07.347 [2024-05-16 20:28:09.680400] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.347 [2024-05-16 20:28:09.680445] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:07.347 [2024-05-16 20:28:09.680467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.347 [2024-05-16 20:28:09.680475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:07.347 [2024-05-16 20:28:09.681612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:07.347 [2024-05-16 20:28:09.681627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:07.347 [2024-05-16 20:28:09.681635] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:07.347 [2024-05-16 20:28:09.681657] bdev_nvme.c:2884:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 00:20:07.347 [2024-05-16 20:28:09.681675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.347 [2024-05-16 20:28:09.681709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:07.347 [2024-05-16 20:28:10.686197] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:07.347 [2024-05-16 20:28:10.686226] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:07.347 [2024-05-16 20:28:10.686247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.347 [2024-05-16 20:28:10.686256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:07.347 [2024-05-16 20:28:10.686267] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:07.347 [2024-05-16 20:28:10.686275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:07.347 [2024-05-16 20:28:10.686287] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:07.347 [2024-05-16 20:28:10.686314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
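The retry cadence visible in the timestamps (attempts at 20:28:05, :06, :07, :09, :10 and so on) is bdev_nvme re-arming its reconnect path every second or two until the device returns. If a different pacing, or an upper bound on how long a missing controller keeps being retried, is wanted, the knobs live in bdev_nvme_set_options; a hedged example follows (flag spellings as in recent scripts/rpc.py trees, normally issued before any controller is attached, and the values are purely illustrative):

# Hypothetical values for illustration; defaults in the tree under test may differ.
./scripts/rpc.py bdev_nvme_set_options \
    --reconnect-delay-sec 2 \
    --ctrlr-loss-timeout-sec 60 \
    --fast-io-fail-timeout-sec 30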
00:20:07.347 [2024-05-16 20:28:10.686323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:20:07.347 [2024-05-16 20:28:12.691273] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:20:07.347 [2024-05-16 20:28:12.691301] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:20:07.347 [2024-05-16 20:28:12.691324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:07.347 [2024-05-16 20:28:12.691332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:20:07.347 [2024-05-16 20:28:12.691343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:20:07.347 [2024-05-16 20:28:12.691350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:20:07.347 [2024-05-16 20:28:12.691358] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:20:07.347 [2024-05-16 20:28:12.691378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:07.347 [2024-05-16 20:28:12.691385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:20:07.347 [2024-05-16 20:28:13.946047] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:07.347
00:20:07.347 Latency(us)
00:20:07.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:07.347 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:07.347 Verification LBA range: start 0x0 length 0x8000
00:20:07.347 Nvme_mlx_0_0n1 : 90.01 10793.33 42.16 0.00 0.00 11837.02 2153.33 11056984.26
00:20:07.347 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:07.347 Verification LBA range: start 0x0 length 0x8000
00:20:07.348 Nvme_mlx_0_1n1 : 90.01 9777.61 38.19 0.00 0.00 13073.53 2356.18 10098286.20
00:20:07.348 ===================================================================================================================
00:20:07.348 Total : 20570.94 80.36 0.00 0.00 12424.76 2153.33 11056984.26
00:20:07.348 Received shutdown signal, test time was about 90.000000 seconds
00:20:07.348
00:20:07.348 Latency(us)
00:20:07.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:07.348 ===================================================================================================================
00:20:07.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 3085016
00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@946 -- # '[' -z 3085016 ']' 00:20:07.348 20:29:19
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@950 -- # kill -0 3085016 00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # uname 00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3085016 00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3085016' 00:20:07.348 killing process with pid 3085016 00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@965 -- # kill 3085016 00:20:07.348 [2024-05-16 20:29:19.983068] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:07.348 20:29:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@970 -- # wait 3085016 00:20:07.348 [2024-05-16 20:29:20.011884] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:20:07.348 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid= 00:20:07.348 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0 00:20:07.348 00:20:07.348 real 1m33.171s 00:20:07.348 user 4m28.156s 00:20:07.348 sys 0m3.926s 00:20:07.348 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:07.348 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:20:07.348 ************************************ 00:20:07.348 END TEST nvmf_device_removal_pci_remove_no_srq 00:20:07.348 ************************************ 00:20:07.348 20:29:20 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:20:07.348 20:29:20 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:07.348 20:29:20 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:07.348 20:29:20 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:20:07.608 ************************************ 00:20:07.608 START TEST nvmf_device_removal_pci_remove 00:20:07.608 ************************************ 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1121 -- # test_remove_and_rescan 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=3100292 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@482 -- # waitforlisten 3100292 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@827 -- # '[' -z 3100292 ']' 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:07.608 20:29:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:07.608 [2024-05-16 20:29:20.396740] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:20:07.608 [2024-05-16 20:29:20.396784] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.608 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.608 [2024-05-16 20:29:20.458147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:07.608 [2024-05-16 20:29:20.537524] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.608 [2024-05-16 20:29:20.537558] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.608 [2024-05-16 20:29:20.537565] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.608 [2024-05-16 20:29:20.537571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.608 [2024-05-16 20:29:20.537576] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
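The target for this pass is started as build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3: -m 0x3 is the reactor core mask (cores 0 and 1), -e 0xFFFF enables every tracepoint group (hence the "Tracepoint Group Mask 0xFFFF specified" notice), and -i 0 selects the shared-memory instance ID that shows up as --file-prefix=spdk0 in the EAL parameters. waitforlisten then blocks until the app answers on /var/tmp/spdk.sock; a rough manual equivalent, with the polling loop purely illustrative, is to start the target and poll the RPC socket with a cheap call such as rpc_get_methods:

# Sketch of the launch-and-wait step (paths as used in this workspace; loop shape is an assumption).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done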
00:20:07.608 [2024-05-16 20:29:20.537624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.608 [2024-05-16 20:29:20.537625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@860 -- # return 0 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.545 [2024-05-16 20:29:21.259667] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b322f0/0x1b367e0) succeed. 00:20:08.545 [2024-05-16 20:29:21.268350] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b337f0/0x1b77e70) succeed. 
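With both reactors up, the script creates the RDMA transport (8192-byte I/O unit, 1024 shared buffers), and the two create_ib_device notices confirm that both mlx5 ports were claimed. The rpc_cmd helper is the test framework's thin wrapper around scripts/rpc.py, so the provisioning sequence traced here and in the lines that follow for the first port corresponds roughly to:

# Values copied from the trace; only the direct rpc.py invocation is an assumption.
./scripts/rpc.py nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024
./scripts/rpc.py bdev_malloc_create 128 512 -b mlx_0_0                  # 128 MiB RAM-backed bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420
# The same steps repeat for mlx_0_1, listening on 192.168.100.9.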
00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:08.545 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:20:08.546 20:29:21 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:20:08.546 20:29:21 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 [2024-05-16 20:29:21.454092] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:08.546 [2024-05-16 20:29:21.454475] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # 
rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 [2024-05-16 20:29:21.528952] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:08.546 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@91 -- # bdevperf_pid=3100494 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 3100494 /var/tmp/bdevperf.sock 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@827 -- # '[' -z 3100494 ']' 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:08.806 20:29:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@860 -- # return 0 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:09.743 20:29:22 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:20:09.743 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:09.744 Nvme_mlx_0_0n1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:09.744 Nvme_mlx_0_1n1 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3100734 00:20:09.744 20:29:22 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:20:09.744 20:29:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/infiniband 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.017 mlx5_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:20:15.017 20:29:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:20:15.017 [2024-05-16 20:29:27.747464] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
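For readers reconstructing the setup from the xtrace above: the per-port work traced earlier reduces to a few target-side RPCs issued by create_subsystem_and_connect_on_netdev, followed by a bdevperf-side attach issued by generate_io_traffic_with_bdevperf. A condensed sketch for mlx_0_0 follows; every command and argument is taken verbatim from the trace, and rpc_cmd is the harness's RPC wrapper (assumed here to behave like scripts/rpc.py against the default target socket, or against /var/tmp/bdevperf.sock when -s is given).

  # malloc bdev sized per MALLOC_BDEV_SIZE=128 / MALLOC_BLOCK_SIZE=512 from the trace, named after the netdev
  rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0
  # subsystem + namespace + RDMA listener on the port's IPv4 address
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420
  # bdevperf was launched with: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1

The trace repeats the same sequence for mlx_0_1 with 192.168.100.9 and the *_1 names, after which bdevperf.py drives perform_tests over the same /var/tmp/bdevperf.sock socket.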
00:20:15.017 [2024-05-16 20:29:27.747536] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:15.017 [2024-05-16 20:29:27.747589] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:15.017 [2024-05-16 20:29:27.747600] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 127 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:20:21.595 20:29:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:20:21.595 [2024-05-16 20:29:34.396695] 
rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x1b32970, err 11. Skip rescan. 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/net 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:20:21.595 20:29:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:20:21.855 [2024-05-16 20:29:34.777154] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bfedb0/0x1b367e0) succeed. 00:20:21.855 [2024-05-16 20:29:34.777210] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:25.171 20:29:37 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.171 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:25.171 [2024-05-16 20:29:37.797351] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:25.172 [2024-05-16 20:29:37.797387] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:20:25.172 [2024-05-16 20:29:37.797403] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:20:25.172 [2024-05-16 20:29:37.797417] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/infiniband 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.172 mlx5_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:20:25.172 20:29:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:20:25.172 [2024-05-16 20:29:37.951469] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 
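Both removals above follow the same pattern: resolve the netdev to its PCI function, hot-remove that function, then poll the target until the corresponding RDMA device disappears from nvmf_get_stats. A rough sketch for mlx_0_1, built only from what the xtrace shows; the redirect target of the bare "echo 1" at device_removal.sh@67 is not captured by xtrace, so writing to the function's sysfs remove node is an assumption here, and loop pacing is elided.

  pci_dir=$(readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device)
  # resolves to /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1 in this run
  echo 1 > "$pci_dir/remove"    # assumed sysfs hot-remove target; only "echo 1" is visible in the trace
  for i in $(seq 1 10); do
      # device_removal.sh@148: stop waiting once mlx5_1 no longer shows up in the target's stats
      rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices[].name' | grep mlx5_1 || break
  done

In this run one pass suffices: the grep comes back empty, ib_count_after_remove is recorded as 1, and the test moves on to the rescan.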
00:20:25.172 [2024-05-16 20:29:37.951535] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:25.172 [2024-05-16 20:29:37.962906] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:25.172 [2024-05-16 20:29:37.962924] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:20:31.739 20:29:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:20:32.676 20:29:45 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/net 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.676 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:32.676 [2024-05-16 20:29:45.668469] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b35600/0x1b77e70) succeed. 
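Recovery is the mirror image of removal: rescan PCI, wait for the netdev to reappear under the same PCI function, bring the link up, restore the address that was lost with the device, and poll nvmf_get_stats until the RDMA device count rises above the post-removal count. A condensed sketch of the mlx_0_1 path as traced; the rescan is another bare "echo 1" at device_removal.sh@57, so /sys/bus/pci/rescan is an assumed target, while the remaining commands are verbatim from the trace.

  echo 1 > /sys/bus/pci/rescan        # assumed target; the trace only shows "echo 1"
  for i in $(seq 1 10); do
      new_net_dev=$(ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/net) && break   # reappears as mlx_0_1
  done
  ip link set mlx_0_1 up
  # get_ip_address returns an empty string after the remove, so the test re-adds the port's address
  ip addr add 192.168.100.9/24 dev mlx_0_1
  for i in $(seq 1 10); do
      ib_count=$(rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length')
      (( ib_count > ib_count_after_remove )) && break    # ib_count_after_remove was 1; it reads 2 once mlx5_1 is back
  done

Once the count reaches 2 the target logs the "come back" notice for 192.168.100.9:4420 and the loop exits; the test then stops bdevperf, kills the process, and dumps its log, which is the try.txt content that follows.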
00:20:32.936 [2024-05-16 20:29:45.672915] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:20:32.936 [2024-05-16 20:29:45.672933] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:20:32.936 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.936 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:20:32.936 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:20:32.936 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:20:32.936 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf 00:20:32.936 20:29:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 3100734 00:21:40.637 0 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@118 -- # killprocess 3100494 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@946 -- # '[' -z 3100494 ']' 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@950 -- # kill -0 3100494 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # uname 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3100494 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3100494' 00:21:40.637 killing process with pid 3100494 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@965 -- # kill 3100494 00:21:40.637 20:30:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@970 -- # wait 3100494 00:21:40.637 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@119 -- # bdevperf_pid= 00:21:40.637 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:21:40.637 [2024-05-16 20:29:21.578381] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:21:40.637 [2024-05-16 20:29:21.578434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100494 ] 00:21:40.637 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.637 [2024-05-16 20:29:21.632797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.637 [2024-05-16 20:29:21.712685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.637 Running I/O for 90 seconds... 00:21:40.637 [2024-05-16 20:29:27.744384] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:40.637 [2024-05-16 20:29:27.744427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.637 [2024-05-16 20:29:27.744438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:16 sqhd:83b9 p:0 m:0 dnr:0 00:21:40.637 [2024-05-16 20:29:27.744447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.637 [2024-05-16 20:29:27.744453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:16 sqhd:83b9 p:0 m:0 dnr:0 00:21:40.637 [2024-05-16 20:29:27.744460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.637 [2024-05-16 20:29:27.744467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:16 sqhd:83b9 p:0 m:0 dnr:0 00:21:40.637 [2024-05-16 20:29:27.744473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.637 [2024-05-16 20:29:27.744479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:16 sqhd:83b9 p:0 m:0 dnr:0 00:21:40.637 [2024-05-16 20:29:27.746336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.637 [2024-05-16 20:29:27.746349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:40.637 [2024-05-16 20:29:27.746370] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:40.637 [2024-05-16 20:29:27.754379] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.764403] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.774432] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.784459] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.794484] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.804513] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:40.637 [2024-05-16 20:29:27.814537] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.824562] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.834588] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.844634] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.854660] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.864685] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.874711] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.884738] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.894855] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.905021] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.915047] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.925360] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.935384] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.945768] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.956136] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.637 [2024-05-16 20:29:27.966791] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:27.977314] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:27.987750] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:27.998213] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.008828] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.019220] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.029652] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.040209] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:40.638 [2024-05-16 20:29:28.050752] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.061003] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.071378] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.081402] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.091487] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.101859] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.112218] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.122726] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.133137] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.143743] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.154123] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.164148] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.174439] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.184662] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.194697] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.204841] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.215264] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.225609] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.236094] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.246541] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.257017] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.267475] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.278011] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:40.638 [2024-05-16 20:29:28.288354] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.298749] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.308768] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.318796] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.328823] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.339134] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.349158] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.359185] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.369401] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.379737] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.390053] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.400080] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.410106] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.420219] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.430248] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.440275] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.450431] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.460728] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.470840] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.481190] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.491804] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.502102] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.512558] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:40.638 [2024-05-16 20:29:28.522998] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.533404] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.543930] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.554272] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.564720] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.575116] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.585430] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.596214] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.606702] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.617072] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.627484] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.637866] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.648367] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.658770] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.669117] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.679463] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.689858] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.700287] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.710806] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.721175] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.731552] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.638 [2024-05-16 20:29:28.741789] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
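The stretch above is one bdev_nvme notice, 'Unable to perform failover, already in progress.', repeated roughly every 10 ms with only the timestamps changing, because a controller reset was still outstanding while failover kept being requested. For triaging a saved copy of this console output, a minimal sketch like the following collapses the run into a count plus the time window it spans; it assumes the output was saved to a hypothetical file named console.log, with one log entry per line as the console stream emits them.

```sh
# Count the repeated failover notices and show the window they cover.
# console.log is a hypothetical name for a saved copy of this console output,
# assumed to hold one log entry per line.
grep -c 'Unable to perform failover, already in progress' console.log
grep 'Unable to perform failover, already in progress' console.log \
  | sed -n '1p;$p'   # first and last occurrence bound the time window
```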
00:21:40.638 [2024-05-16 20:29:28.749371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:187896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1810ef 00:21:40.638 [2024-05-16 20:29:28.749389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.638 [2024-05-16 20:29:28.749405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:187904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1810ef 00:21:40.638 [2024-05-16 20:29:28.749416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.638 [2024-05-16 20:29:28.749431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:187912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1810ef 00:21:40.638 [2024-05-16 20:29:28.749438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.638 [2024-05-16 20:29:28.749446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:187920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1810ef 00:21:40.638 [2024-05-16 20:29:28.749469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.638 [2024-05-16 20:29:28.749478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:187928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1810ef 00:21:40.638 [2024-05-16 20:29:28.749484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.638 [2024-05-16 20:29:28.749493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:187936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1810ef 00:21:40.638 [2024-05-16 20:29:28.749500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.638 [2024-05-16 20:29:28.749508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:187944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1810ef 00:21:40.638 [2024-05-16 20:29:28.749515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.638 [2024-05-16 20:29:28.749523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:187952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:187960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 
20:29:28.749554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:187968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:187976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:187984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:187992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:188000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:188008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:188016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:188024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:188032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749697] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:188040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:188048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:188056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:188064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:188072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:188080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:188088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:188096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:188104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:188112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:188120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:188128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:188136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:188144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:188152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:188160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:188168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:188176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:188184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.749989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.749997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:188192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.750004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.750011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:188200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.750018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.750026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:188208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.750032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.750040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:188216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.750046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.750054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:188224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.750061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.750069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:188232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.750075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.639 [2024-05-16 20:29:28.750082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:188240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x1810ef 00:21:40.639 [2024-05-16 20:29:28.750088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:188248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:188256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:188264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:188272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:188280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:188288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:188296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:188304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:188312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:188320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:188328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:188336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:188344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:188352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:188360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:188368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:188376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:188384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:188392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:188400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:188408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x1810ef 00:21:40.640 [2024-05-16 20:29:28.750398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:188416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:188424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:188432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:188440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:188448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:188456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:188464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:188472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:188480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:188488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:188496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:188504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:188512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:188520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:188528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:188536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:188544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.640 [2024-05-16 20:29:28.750654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.640 [2024-05-16 20:29:28.750662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:188552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750670] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:188560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:188568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:188576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:188584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:188592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:188600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:188608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:188616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:188624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:188632 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:188640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:188648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:188656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:188664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:188672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:188680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:188688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:188696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:188704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:188712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:188720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.750991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:188728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.750997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:188736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:188744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:188752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:188760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:188768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:188776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:188784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 
m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:188792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:188800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.641 [2024-05-16 20:29:28.751134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.641 [2024-05-16 20:29:28.751142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:188808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:188816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:188824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:188832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:188840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:188848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:188856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:188864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:188872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:188880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:188888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:188896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.751321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:188904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.642 [2024-05-16 20:29:28.751331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.764114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:40.642 [2024-05-16 20:29:28.764128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:40.642 [2024-05-16 20:29:28.764135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:188912 len:8 PRP1 0x0 PRP2 0x0 00:21:40.642 [2024-05-16 20:29:28.764142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.642 [2024-05-16 20:29:28.766814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:40.642 [2024-05-16 20:29:28.767086] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:40.642 [2024-05-16 20:29:28.767100] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.642 [2024-05-16 20:29:28.767109] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:40.642 [2024-05-16 20:29:28.767124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.642 [2024-05-16 20:29:28.767132] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
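The burst before the reset is nvme_qpair printing the outstanding READ and WRITE commands it failed when the submission queue was deleted for the reset; each completion is reported as 'ABORTED - SQ DELETION (00/08)', the remaining queued I/O is then aborted, and the first reconnect attempt immediately fails RDMA address resolution. A hedged sketch for tallying those aborted commands from a saved log, under the same assumptions (hypothetical console.log, one entry per line):

```sh
# Tally the READ/WRITE commands printed while the queue pair was torn down,
# and report the LBA span they covered.
# console.log is a hypothetical saved copy of this console output,
# assumed to hold one log entry per line.
grep 'nvme_io_qpair_print_command' console.log | awk '
  {
    op = ""
    for (i = 1; i <= NF; i++) {
      if ($i == "READ" || $i == "WRITE") op = $i
      else if ($i ~ /^lba:/) lba = substr($i, 5) + 0
    }
    if (op == "") next
    count[op]++
    if (min == "" || lba < min) min = lba
    if (lba > max) max = lba
  }
  END {
    for (o in count) printf "%s: %d commands\n", o, count[o]
    printf "LBA span: %d - %d\n", min, max
  }'
```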
00:21:40.642 [2024-05-16 20:29:28.767153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:40.642 [2024-05-16 20:29:28.767159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:40.642 [2024-05-16 20:29:28.767167] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:40.642 [2024-05-16 20:29:28.767186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.642 [2024-05-16 20:29:28.767194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:40.642 [2024-05-16 20:29:29.769715] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:40.642 [2024-05-16 20:29:29.769752] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.642 [2024-05-16 20:29:29.769758] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:40.642 [2024-05-16 20:29:29.769776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.642 [2024-05-16 20:29:29.769783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:40.642 [2024-05-16 20:29:29.769793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:40.642 [2024-05-16 20:29:29.769799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:40.642 [2024-05-16 20:29:29.769807] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:40.642 [2024-05-16 20:29:29.769827] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.642 [2024-05-16 20:29:29.769835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:40.642 [2024-05-16 20:29:30.773468] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:40.642 [2024-05-16 20:29:30.773502] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.642 [2024-05-16 20:29:30.773509] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:40.642 [2024-05-16 20:29:30.773529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.642 [2024-05-16 20:29:30.773537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:21:40.642 [2024-05-16 20:29:30.773555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:40.642 [2024-05-16 20:29:30.773562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:40.642 [2024-05-16 20:29:30.773569] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:40.642 [2024-05-16 20:29:30.773590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.642 [2024-05-16 20:29:30.773599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:40.642 [2024-05-16 20:29:32.778834] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.642 [2024-05-16 20:29:32.778877] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:40.642 [2024-05-16 20:29:32.778899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.642 [2024-05-16 20:29:32.778907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:40.642 [2024-05-16 20:29:32.778918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:40.642 [2024-05-16 20:29:32.778924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:40.642 [2024-05-16 20:29:32.778931] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:40.642 [2024-05-16 20:29:32.778953] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.642 [2024-05-16 20:29:32.778962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:40.642 [2024-05-16 20:29:34.783914] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.642 [2024-05-16 20:29:34.783942] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:40.642 [2024-05-16 20:29:34.783963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.642 [2024-05-16 20:29:34.783971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:40.642 [2024-05-16 20:29:34.783981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:40.642 [2024-05-16 20:29:34.783988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:40.642 [2024-05-16 20:29:34.783995] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:40.642 [2024-05-16 20:29:34.784017] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:40.642 [2024-05-16 20:29:34.784025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:40.642 [2024-05-16 20:29:36.788993] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.642 [2024-05-16 20:29:36.789026] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:40.642 [2024-05-16 20:29:36.789045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.642 [2024-05-16 20:29:36.789053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:40.642 [2024-05-16 20:29:36.789063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:40.642 [2024-05-16 20:29:36.789070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:40.642 [2024-05-16 20:29:36.789077] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:40.642 [2024-05-16 20:29:36.789097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.642 [2024-05-16 20:29:36.789106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:40.642 [2024-05-16 20:29:37.861918] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:40.642 [2024-05-16 20:29:37.957031] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:40.643 [2024-05-16 20:29:37.957058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.643 [2024-05-16 20:29:37.957067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:16 sqhd:83b9 p:0 m:0 dnr:0 00:21:40.643 [2024-05-16 20:29:37.957079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.643 [2024-05-16 20:29:37.957085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:16 sqhd:83b9 p:0 m:0 dnr:0 00:21:40.643 [2024-05-16 20:29:37.957093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.643 [2024-05-16 20:29:37.957102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:16 sqhd:83b9 p:0 m:0 dnr:0 00:21:40.643 [2024-05-16 20:29:37.957110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.643 [2024-05-16 20:29:37.957117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:16 sqhd:83b9 p:0 m:0 dnr:0 00:21:40.643 [2024-05-16 20:29:37.959274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.643 [2024-05-16 20:29:37.959309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:21:40.643 [2024-05-16 20:29:37.959375] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:40.643 [2024-05-16 20:29:37.967042] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:37.977069] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:37.987093] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:37.997120] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.007146] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.017171] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.027199] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.037227] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.047252] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.057277] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.067303] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.077331] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.087357] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.097382] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.107410] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.117436] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.127463] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.137489] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.147515] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.157540] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.167569] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.177595] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:40.643 [2024-05-16 20:29:38.187621] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.197646] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.207671] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.217698] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.227724] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.237750] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.247775] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.257800] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.267826] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.277852] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.287878] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.297903] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.307930] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.317956] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.327983] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.338008] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.348034] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.358059] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.368086] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.378111] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.388136] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.398163] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.408189] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:40.643 [2024-05-16 20:29:38.418217] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.428243] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.438270] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.448295] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.458322] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.468349] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.478375] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.488400] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.498429] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.508455] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.518483] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.528508] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.538535] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.548563] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.558590] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.568618] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.578642] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.588668] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.598693] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.608720] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.618746] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.628774] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.638798] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:40.643 [2024-05-16 20:29:38.648826] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.658851] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.668880] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.678907] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.688933] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.698957] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.708983] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.719010] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.729033] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.739058] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.749084] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.759109] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.769133] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.643 [2024-05-16 20:29:38.779159] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.789184] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.799211] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.809322] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.819358] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.830228] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.840364] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.850698] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.860725] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.872064] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:40.644 [2024-05-16 20:29:38.882162] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.892291] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.902492] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.912509] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.922700] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.932725] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.942882] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.952910] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:40.644 [2024-05-16 20:29:38.961893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b0000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.961911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.961926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ae000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.961934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.961943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ac000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.961949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.961958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079aa000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.961967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.961975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a8000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.961982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.961990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a6000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.961996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 
00:21:40.644 [2024-05-16 20:29:38.962004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a4000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a2000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a0000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799e000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799c000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799a000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007998000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007996000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007994000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962133] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007992000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007990000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798e000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798c000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798a000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007988000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007986000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007984000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007982000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.644 [2024-05-16 20:29:38.962346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x136a69 00:21:40.644 [2024-05-16 20:29:38.962352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:94 nsid:1 lba:7744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7816 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 
len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x136a69 00:21:40.645 
[2024-05-16 20:29:38.962814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x136a69 00:21:40.645 [2024-05-16 20:29:38.962914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.645 [2024-05-16 20:29:38.962922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.962928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.962936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.962942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.962951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.962958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.962967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.962974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.962982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.962988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.962996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x136a69 00:21:40.646 [2024-05-16 20:29:38.963223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 
[2024-05-16 20:29:38.963356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.646 [2024-05-16 20:29:38.963393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.646 [2024-05-16 20:29:38.963400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8336 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963648] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.963782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.647 [2024-05-16 20:29:38.963788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32698 cdw0:a541b670 sqhd:2530 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.976533] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:40.647 [2024-05-16 20:29:38.976544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:40.647 [2024-05-16 20:29:38.976551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8496 len:8 PRP1 0x0 PRP2 0x0 00:21:40.647 [2024-05-16 20:29:38.976559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.647 [2024-05-16 20:29:38.976602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:40.647 [2024-05-16 20:29:38.979120] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:40.647 [2024-05-16 20:29:38.979136] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.647 [2024-05-16 20:29:38.979142] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:40.647 [2024-05-16 20:29:38.979158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.647 [2024-05-16 20:29:38.979168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:40.647 [2024-05-16 20:29:38.979179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:40.647 [2024-05-16 20:29:38.979202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:40.647 [2024-05-16 20:29:38.979210] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:40.647 [2024-05-16 20:29:38.979230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.647 [2024-05-16 20:29:38.979238] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:40.647 [2024-05-16 20:29:39.984602] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:40.647 [2024-05-16 20:29:39.984641] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.647 [2024-05-16 20:29:39.984649] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:40.647 [2024-05-16 20:29:39.984667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.647 [2024-05-16 20:29:39.984675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:21:40.647 [2024-05-16 20:29:39.984707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:40.647 [2024-05-16 20:29:39.984715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:40.647 [2024-05-16 20:29:39.984722] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:40.647 [2024-05-16 20:29:39.984753] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.647 [2024-05-16 20:29:39.984761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:40.647 [2024-05-16 20:29:40.987284] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:40.647 [2024-05-16 20:29:40.987320] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.647 [2024-05-16 20:29:40.987327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:40.647 [2024-05-16 20:29:40.987344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.647 [2024-05-16 20:29:40.987352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:40.647 [2024-05-16 20:29:40.987362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:40.647 [2024-05-16 20:29:40.987369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:40.648 [2024-05-16 20:29:40.987376] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:40.648 [2024-05-16 20:29:40.987398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.648 [2024-05-16 20:29:40.987406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:40.648 [2024-05-16 20:29:42.992790] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.648 [2024-05-16 20:29:42.992829] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:40.648 [2024-05-16 20:29:42.992852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.648 [2024-05-16 20:29:42.992867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:40.648 [2024-05-16 20:29:42.993715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:40.648 [2024-05-16 20:29:42.993727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:40.648 [2024-05-16 20:29:42.993735] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:40.648 [2024-05-16 20:29:42.993783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:40.648 [2024-05-16 20:29:42.993792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:40.648 [2024-05-16 20:29:44.999492] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.648 [2024-05-16 20:29:44.999530] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:40.648 [2024-05-16 20:29:44.999554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.648 [2024-05-16 20:29:44.999562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:40.648 [2024-05-16 20:29:44.999571] bdev_nvme.c:2884:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 00:21:40.648 [2024-05-16 20:29:44.999598] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:40.648 [2024-05-16 20:29:44.999605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:40.648 [2024-05-16 20:29:44.999612] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:40.648 [2024-05-16 20:29:44.999646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.648 [2024-05-16 20:29:44.999680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:40.648 [2024-05-16 20:29:46.002546] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:40.648 [2024-05-16 20:29:46.002580] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:40.648 [2024-05-16 20:29:46.002606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.648 [2024-05-16 20:29:46.002615] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:40.648 [2024-05-16 20:29:46.002708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:40.648 [2024-05-16 20:29:46.002717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:40.648 [2024-05-16 20:29:46.002725] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:40.648 [2024-05-16 20:29:46.002755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.648 [2024-05-16 20:29:46.002764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:40.648 [2024-05-16 20:29:47.258827] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
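The trace above shows bdev_nvme retrying the mlx_0_1 controller roughly every one to two seconds, failing RDMA address resolution (RDMA_CM_EVENT_ADDR_ERROR) on each attempt until the reset at 20:29:47.258827 finally reports "Resetting controller successful." If this console output has been saved to a file, the retry timeline is easy to pull out; the file name build.log below is only a placeholder for wherever the log was captured, not something the test produces.

    # Sketch for reading a saved copy of this console log; build.log is a placeholder name.
    grep -n 'resetting controller' build.log                          # every reconnect attempt
    grep -nE 'Resetting controller (failed|successful)' build.log     # outcome of each attempt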
00:21:40.648
00:21:40.648 Latency(us)
00:21:40.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.648 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:40.648 Verification LBA range: start 0x0 length 0x8000
00:21:40.648 Nvme_mlx_0_0n1 : 90.01 10707.03 41.82 0.00 0.00 11932.67 2044.10 11056984.26
00:21:40.648 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:40.648 Verification LBA range: start 0x0 length 0x8000
00:21:40.648 Nvme_mlx_0_1n1 : 90.01 9699.74 37.89 0.00 0.00 13176.60 2184.53 10098286.20
00:21:40.648 ===================================================================================================================
00:21:40.648 Total : 20406.77 79.71 0.00 0.00 12523.95 2044.10 11056984.26
00:21:40.648 Received shutdown signal, test time was about 90.000000 seconds
00:21:40.648
00:21:40.648 Latency(us)
00:21:40.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.648 ===================================================================================================================
00:21:40.648 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@202 -- # killprocess 3100292
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@946 -- # '[' -z 3100292 ']'
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@950 -- # kill -0 3100292
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # uname
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3100292
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3100292'
00:21:40.648 killing process with pid 3100292
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@965 -- # kill 3100292
00:21:40.648 [2024-05-16 20:30:53.255641] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@970 -- # wait 3100292
00:21:40.648 [2024-05-16 20:30:53.306311] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:21:40.648 20:30:53
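In the verification summary above, the MiB/s column is the measured IOPS multiplied by the 4096-byte I/O size used by the job. A quick sanity check of the reported figures (an illustration only, not part of the harness):

    # MiB/s = IOPS * 4096 bytes / 2^20
    awk 'BEGIN {
        printf "Nvme_mlx_0_0n1: %.2f MiB/s\n", 10707.03 * 4096 / 1048576   # ~41.82
        printf "Nvme_mlx_0_1n1: %.2f MiB/s\n",  9699.74 * 4096 / 1048576   # ~37.89
        printf "Total:          %.2f MiB/s\n", 20406.77 * 4096 / 1048576   # ~79.71
    }'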
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@203 -- # nvmfpid= 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@205 -- # return 0 00:21:40.648 00:21:40.648 real 1m33.215s 00:21:40.648 user 4m28.086s 00:21:40.648 sys 0m3.993s 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:21:40.648 ************************************ 00:21:40.648 END TEST nvmf_device_removal_pci_remove 00:21:40.648 ************************************ 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@317 -- # nvmftestfini 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@117 -- # sync 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@120 -- # set +e 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.648 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:40.648 rmmod nvme_rdma 00:21:40.648 rmmod nvme_fabrics 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@124 -- # set -e 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@125 -- # return 0 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@318 -- # clean_bond_device 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # ip link 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # grep bond_nvmf 00:21:40.907 00:21:40.907 real 3m12.908s 00:21:40.907 user 8m58.240s 00:21:40.907 sys 0m12.595s 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:40.907 20:30:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:21:40.907 ************************************ 00:21:40.907 END TEST nvmf_device_removal 00:21:40.907 ************************************ 00:21:40.907 20:30:53 nvmf_rdma -- nvmf/nvmf.sh@80 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:40.907 20:30:53 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:40.907 20:30:53 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:40.907 20:30:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:40.907 ************************************ 00:21:40.907 START TEST nvmf_srq_overwhelm 00:21:40.907 ************************************ 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:40.907 * Looking for test storage... 00:21:40.907 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:21:40.907 20:30:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:47.476 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:47.476 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:47.476 Found net devices under 0000:da:00.0: mlx_0_0 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:47.476 Found net devices under 0000:da:00.1: mlx_0_1 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:47.476 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:47.476 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:47.476 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:21:47.476 altname enp218s0f0np0 00:21:47.476 altname ens818f0np0 00:21:47.476 inet 192.168.100.8/24 scope global mlx_0_0 00:21:47.476 valid_lft forever preferred_lft forever 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:47.477 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:47.477 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:21:47.477 altname enp218s0f1np1 00:21:47.477 altname ens818f1np1 00:21:47.477 inet 192.168.100.9/24 scope global mlx_0_1 00:21:47.477 valid_lft forever preferred_lft forever 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
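Stripped of the xtrace noise, the address discovery above boils down to one ip/awk/cut pipeline per RDMA port; on this node it resolves mlx_0_0 to 192.168.100.8 and mlx_0_1 to 192.168.100.9, matching the ip addr output in the trace. A condensed restatement of the commands already shown (not new functionality):

    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # -> 192.168.100.8
    # -> 192.168.100.9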
00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:47.477 
192.168.100.9' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:47.477 192.168.100.9' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:47.477 192.168.100.9' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=3119477 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 3119477 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@827 -- # '[' -z 3119477 ']' 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:47.477 20:30:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.477 [2024-05-16 20:30:59.447902] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:21:47.477 [2024-05-16 20:30:59.447955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.477 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.477 [2024-05-16 20:30:59.509580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.477 [2024-05-16 20:30:59.589902] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:47.477 [2024-05-16 20:30:59.589942] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.477 [2024-05-16 20:30:59.589949] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.477 [2024-05-16 20:30:59.589955] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.477 [2024-05-16 20:30:59.589960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.477 [2024-05-16 20:30:59.589998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.477 [2024-05-16 20:30:59.590097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.477 [2024-05-16 20:30:59.590349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.477 [2024-05-16 20:30:59.590350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # return 0 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.477 [2024-05-16 20:31:00.332011] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17af9b0/0x17b3ea0) succeed. 00:21:47.477 [2024-05-16 20:31:00.342241] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17b0ff0/0x17f5530) succeed. 
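At this point the target is running on core mask 0xF with both mlx5 ports registered, and the rest of the srq_overwhelm setup is traced below: one RDMA transport, then per loop iteration a subsystem, a 64 MB malloc bdev with 512-byte blocks, a namespace, a listener on 192.168.100.8:4420, and a host-side connect. Condensed from the rpc_cmd and nvme calls that follow for cnode0 (rpc_cmd and waitforblk are the autotest wrappers from common/autotest_common.sh; this is a sketch of the traced sequence, which the loop then repeats for the remaining subsystems):

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid=803833e2-2ada-e911-906e-0017a4403562 \
        -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420
    waitforblk nvme0n1    # polls lsblk -l -o NAME until the new namespace shows up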
00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.477 Malloc0 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:47.477 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.478 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.478 [2024-05-16 20:31:00.438005] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:47.478 [2024-05-16 20:31:00.438407] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:47.478 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.478 20:31:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme0n1 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # 
return 0 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:48.854 Malloc1 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.854 20:31:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme1n1 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:49.790 Malloc2 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.790 20:31:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme2n1 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.726 Malloc3 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:50.726 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.727 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.727 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.727 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:50.727 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.727 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.727 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.727 20:31:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:51.661 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:21:51.661 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:21:51.661 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:21:51.661 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme3n1 00:21:51.661 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:21:51.661 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:51.661 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:21:51.661 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:51.662 Malloc4 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 
192.168.100.8 -s 4420 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.662 20:31:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:53.038 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:21:53.038 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:21:53.038 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:21:53.038 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme4n1 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:53.039 Malloc5 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.039 20:31:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma 
-n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:54.001 20:31:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:21:54.001 20:31:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:21:54.001 20:31:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:21:54.001 20:31:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme5n1 00:21:54.002 20:31:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:21:54.002 20:31:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:54.002 20:31:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:21:54.002 20:31:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:21:54.002 [global] 00:21:54.002 thread=1 00:21:54.002 invalidate=1 00:21:54.002 rw=read 00:21:54.002 time_based=1 00:21:54.002 runtime=10 00:21:54.002 ioengine=libaio 00:21:54.002 direct=1 00:21:54.002 bs=1048576 00:21:54.002 iodepth=128 00:21:54.002 norandommap=1 00:21:54.002 numjobs=13 00:21:54.002 00:21:54.002 [job0] 00:21:54.002 filename=/dev/nvme0n1 00:21:54.002 [job1] 00:21:54.002 filename=/dev/nvme1n1 00:21:54.002 [job2] 00:21:54.002 filename=/dev/nvme2n1 00:21:54.002 [job3] 00:21:54.002 filename=/dev/nvme3n1 00:21:54.002 [job4] 00:21:54.002 filename=/dev/nvme4n1 00:21:54.002 [job5] 00:21:54.002 filename=/dev/nvme5n1 00:21:54.002 Could not set queue depth (nvme0n1) 00:21:54.002 Could not set queue depth (nvme1n1) 00:21:54.002 Could not set queue depth (nvme2n1) 00:21:54.002 Could not set queue depth (nvme3n1) 00:21:54.002 Could not set queue depth (nvme4n1) 00:21:54.002 Could not set queue depth (nvme5n1) 00:21:54.264 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:54.264 ... 00:21:54.264 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:54.264 ... 00:21:54.264 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:54.264 ... 00:21:54.264 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:54.264 ... 00:21:54.264 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:54.264 ... 00:21:54.264 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:54.264 ... 
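Before the fio results that follow, it helps to condense what the xtrace above has done six times over and what workload is about to be applied. Both sketches below are illustrative reconstructions, not the test scripts themselves: SPDK_DIR and the /tmp/srq_overwhelm.fio file name are assumptions, rpc_cmd is again assumed to map onto scripts/rpc.py, and the --hostnqn/--hostid options from the logged nvme connect calls are left out for brevity.

# Per-controller setup performed for i in 0..5 above: subsystem, 64 MiB malloc
# bdev with 512-byte blocks, namespace, RDMA listener, then a host-side connect.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
for i in $(seq 0 5); do
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    # The test then polls lsblk (waitforblk) until /dev/nvme${i}n1 appears.
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
done

The fio-wrapper call above (-i 1048576 -d 128 -t read -r 10 -n 13) expands into the job file echoed in the log. Saved standalone, it can be replayed directly with fio; with numjobs=13 in [global] and six jobs, this gives the 6 x 13 = 78 reader threads reported on the next line.

cat > /tmp/srq_overwhelm.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1
EOF
fio /tmp/srq_overwhelm.fio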
00:21:54.264 fio-3.35 00:21:54.264 Starting 78 threads 00:22:09.155 00:22:09.155 job0: (groupid=0, jobs=1): err= 0: pid=3120988: Thu May 16 20:31:21 2024 00:22:09.155 read: IOPS=15, BW=15.0MiB/s (15.8MB/s)(214MiB/14235msec) 00:22:09.155 slat (usec): min=576, max=2179.0k, avg=46728.43, stdev=262966.49 00:22:09.155 clat (msec): min=936, max=13965, avg=7120.28, stdev=3678.07 00:22:09.155 lat (msec): min=939, max=13965, avg=7167.00, stdev=3696.74 00:22:09.155 clat percentiles (msec): 00:22:09.155 | 1.00th=[ 936], 5.00th=[ 944], 10.00th=[ 978], 20.00th=[ 4212], 00:22:09.155 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 9463], 60.00th=[ 9731], 00:22:09.155 | 70.00th=[ 9866], 80.00th=[10134], 90.00th=[10268], 95.00th=[10671], 00:22:09.155 | 99.00th=[12953], 99.50th=[12953], 99.90th=[14026], 99.95th=[14026], 00:22:09.155 | 99.99th=[14026] 00:22:09.155 bw ( KiB/s): min= 2048, max=88064, per=1.42%, avg=35635.20, stdev=42532.38, samples=5 00:22:09.155 iops : min= 2, max= 86, avg=34.80, stdev=41.54, samples=5 00:22:09.155 lat (msec) : 1000=14.49%, 2000=1.87%, >=2000=83.64% 00:22:09.155 cpu : usr=0.01%, sys=0.62%, ctx=389, majf=0, minf=32769 00:22:09.155 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.5%, 32=15.0%, >=64=70.6% 00:22:09.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.155 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:22:09.155 issued rwts: total=214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.155 job0: (groupid=0, jobs=1): err= 0: pid=3120989: Thu May 16 20:31:21 2024 00:22:09.155 read: IOPS=2, BW=2390KiB/s (2448kB/s)(33.0MiB/14137msec) 00:22:09.155 slat (usec): min=726, max=2153.5k, avg=363367.56, stdev=768421.04 00:22:09.155 clat (msec): min=2145, max=14134, avg=11282.76, stdev=3654.34 00:22:09.155 lat (msec): min=4201, max=14136, avg=11646.13, stdev=3295.95 00:22:09.155 clat percentiles (msec): 00:22:09.155 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 6409], 20.00th=[ 6477], 00:22:09.155 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[14160], 00:22:09.155 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.155 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.155 | 99.99th=[14160] 00:22:09.155 lat (msec) : >=2000=100.00% 00:22:09.155 cpu : usr=0.00%, sys=0.15%, ctx=55, majf=0, minf=8449 00:22:09.155 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:22:09.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.155 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.155 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.155 job0: (groupid=0, jobs=1): err= 0: pid=3120990: Thu May 16 20:31:21 2024 00:22:09.155 read: IOPS=9, BW=9688KiB/s (9920kB/s)(134MiB/14164msec) 00:22:09.155 slat (usec): min=589, max=2192.6k, avg=89589.13, stdev=383454.30 00:22:09.155 clat (msec): min=2158, max=13987, avg=12526.61, stdev=2161.51 00:22:09.155 lat (msec): min=4221, max=14027, avg=12616.20, stdev=1965.92 00:22:09.155 clat percentiles (msec): 00:22:09.155 | 1.00th=[ 4212], 5.00th=[ 6477], 10.00th=[ 9866], 20.00th=[12550], 00:22:09.155 | 30.00th=[12818], 40.00th=[12953], 50.00th=[13221], 60.00th=[13355], 00:22:09.155 | 70.00th=[13489], 80.00th=[13624], 90.00th=[13758], 95.00th=[13892], 00:22:09.155 | 99.00th=[14026], 
99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:22:09.155 | 99.99th=[14026] 00:22:09.155 bw ( KiB/s): min= 2048, max= 4096, per=0.14%, avg=3584.00, stdev=1024.00, samples=4 00:22:09.155 iops : min= 2, max= 4, avg= 3.50, stdev= 1.00, samples=4 00:22:09.155 lat (msec) : >=2000=100.00% 00:22:09.155 cpu : usr=0.01%, sys=0.67%, ctx=305, majf=0, minf=32769 00:22:09.155 IO depths : 1=0.7%, 2=1.5%, 4=3.0%, 8=6.0%, 16=11.9%, 32=23.9%, >=64=53.0% 00:22:09.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.155 complete : 0=0.0%, 4=87.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=12.5% 00:22:09.155 issued rwts: total=134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: pid=3120991: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=1, BW=1090KiB/s (1116kB/s)(15.0MiB/14092msec) 00:22:09.156 slat (msec): min=21, max=2157, avg=795.89, stdev=990.24 00:22:09.156 clat (msec): min=2153, max=12879, avg=8703.37, stdev=3293.74 00:22:09.156 lat (msec): min=4215, max=14091, avg=9499.25, stdev=3029.67 00:22:09.156 clat percentiles (msec): 00:22:09.156 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 4212], 20.00th=[ 6342], 00:22:09.156 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10671], 00:22:09.156 | 70.00th=[10671], 80.00th=[10805], 90.00th=[12818], 95.00th=[12818], 00:22:09.156 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:22:09.156 | 99.99th=[12818] 00:22:09.156 lat (msec) : >=2000=100.00% 00:22:09.156 cpu : usr=0.00%, sys=0.08%, ctx=42, majf=0, minf=3841 00:22:09.156 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: pid=3120992: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=1, BW=1890KiB/s (1935kB/s)(26.0MiB/14087msec) 00:22:09.156 slat (usec): min=728, max=2141.7k, avg=458611.17, stdev=833635.05 00:22:09.156 clat (msec): min=2162, max=14085, avg=10245.66, stdev=3586.16 00:22:09.156 lat (msec): min=4228, max=14086, avg=10704.27, stdev=3258.62 00:22:09.156 clat percentiles (msec): 00:22:09.156 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 6477], 00:22:09.156 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[12818], 00:22:09.156 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:22:09.156 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:22:09.156 | 99.99th=[14026] 00:22:09.156 lat (msec) : >=2000=100.00% 00:22:09.156 cpu : usr=0.00%, sys=0.14%, ctx=57, majf=0, minf=6657 00:22:09.156 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:22:09.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:09.156 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: pid=3120993: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=48, BW=48.3MiB/s (50.6MB/s)(680MiB/14079msec) 00:22:09.156 slat (usec): min=52, max=2114.0k, 
avg=17575.86, stdev=137838.17 00:22:09.156 clat (msec): min=471, max=9218, avg=2456.64, stdev=3081.02 00:22:09.156 lat (msec): min=473, max=9222, avg=2474.22, stdev=3088.62 00:22:09.156 clat percentiles (msec): 00:22:09.156 | 1.00th=[ 477], 5.00th=[ 550], 10.00th=[ 684], 20.00th=[ 810], 00:22:09.156 | 30.00th=[ 877], 40.00th=[ 944], 50.00th=[ 1020], 60.00th=[ 1133], 00:22:09.156 | 70.00th=[ 1217], 80.00th=[ 1653], 90.00th=[ 8926], 95.00th=[ 9060], 00:22:09.156 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:22:09.156 | 99.99th=[ 9194] 00:22:09.156 bw ( KiB/s): min= 2048, max=204800, per=3.75%, avg=94338.75, stdev=73587.54, samples=12 00:22:09.156 iops : min= 2, max= 200, avg=91.92, stdev=71.90, samples=12 00:22:09.156 lat (msec) : 500=2.79%, 750=9.26%, 1000=37.06%, 2000=31.03%, >=2000=19.85% 00:22:09.156 cpu : usr=0.04%, sys=0.98%, ctx=1092, majf=0, minf=32769 00:22:09.156 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:22:09.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:09.156 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: pid=3120994: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=2, BW=2906KiB/s (2976kB/s)(40.0MiB/14095msec) 00:22:09.156 slat (usec): min=716, max=2140.1k, avg=298461.39, stdev=703210.30 00:22:09.156 clat (msec): min=2155, max=14093, avg=11372.33, stdev=3451.18 00:22:09.156 lat (msec): min=4213, max=14094, avg=11670.79, stdev=3135.46 00:22:09.156 clat percentiles (msec): 00:22:09.156 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 6409], 20.00th=[ 8490], 00:22:09.156 | 30.00th=[10671], 40.00th=[10805], 50.00th=[12818], 60.00th=[13892], 00:22:09.156 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:22:09.156 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.156 | 99.99th=[14160] 00:22:09.156 lat (msec) : >=2000=100.00% 00:22:09.156 cpu : usr=0.00%, sys=0.22%, ctx=61, majf=0, minf=10241 00:22:09.156 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:22:09.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.156 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: pid=3120996: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=25, BW=25.8MiB/s (27.0MB/s)(365MiB/14150msec) 00:22:09.156 slat (usec): min=632, max=2144.5k, avg=32961.54, stdev=216714.91 00:22:09.156 clat (msec): min=950, max=11782, avg=4695.68, stdev=4612.46 00:22:09.156 lat (msec): min=953, max=11787, avg=4728.65, stdev=4620.74 00:22:09.156 clat percentiles (msec): 00:22:09.156 | 1.00th=[ 953], 5.00th=[ 986], 10.00th=[ 1011], 20.00th=[ 1062], 00:22:09.156 | 30.00th=[ 1116], 40.00th=[ 1318], 50.00th=[ 1552], 60.00th=[ 1670], 00:22:09.156 | 70.00th=[10805], 80.00th=[11073], 90.00th=[11476], 95.00th=[11610], 00:22:09.156 | 99.00th=[11745], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:22:09.156 | 99.99th=[11745] 00:22:09.156 bw ( KiB/s): min= 1961, max=143360, per=2.42%, avg=60917.12, stdev=59728.73, samples=8 00:22:09.156 iops : min= 1, max= 140, avg=59.37, stdev=58.46, samples=8 
00:22:09.156 lat (msec) : 1000=9.04%, 2000=52.60%, >=2000=38.36% 00:22:09.156 cpu : usr=0.00%, sys=0.80%, ctx=855, majf=0, minf=32769 00:22:09.156 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.7% 00:22:09.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:09.156 issued rwts: total=365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: pid=3120997: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=24, BW=25.0MiB/s (26.2MB/s)(354MiB/14161msec) 00:22:09.156 slat (usec): min=635, max=2099.2k, avg=33907.85, stdev=219285.69 00:22:09.156 clat (msec): min=772, max=11856, avg=4789.63, stdev=4798.11 00:22:09.156 lat (msec): min=785, max=11858, avg=4823.54, stdev=4805.95 00:22:09.156 clat percentiles (msec): 00:22:09.156 | 1.00th=[ 785], 5.00th=[ 785], 10.00th=[ 835], 20.00th=[ 986], 00:22:09.156 | 30.00th=[ 1150], 40.00th=[ 1200], 50.00th=[ 1267], 60.00th=[ 2165], 00:22:09.156 | 70.00th=[11208], 80.00th=[11476], 90.00th=[11610], 95.00th=[11745], 00:22:09.156 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:22:09.156 | 99.99th=[11879] 00:22:09.156 bw ( KiB/s): min= 2048, max=165888, per=2.05%, avg=51643.78, stdev=67845.59, samples=9 00:22:09.156 iops : min= 2, max= 162, avg=50.33, stdev=66.32, samples=9 00:22:09.156 lat (msec) : 1000=20.62%, 2000=39.27%, >=2000=40.11% 00:22:09.156 cpu : usr=0.01%, sys=0.80%, ctx=816, majf=0, minf=32769 00:22:09.156 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.2% 00:22:09.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:09.156 issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: pid=3120998: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=33, BW=33.5MiB/s (35.1MB/s)(407MiB/12149msec) 00:22:09.156 slat (usec): min=564, max=2146.6k, avg=24590.92, stdev=180701.61 00:22:09.156 clat (msec): min=892, max=9641, avg=3628.46, stdev=3642.13 00:22:09.156 lat (msec): min=901, max=9650, avg=3653.05, stdev=3649.56 00:22:09.156 clat percentiles (msec): 00:22:09.156 | 1.00th=[ 911], 5.00th=[ 936], 10.00th=[ 961], 20.00th=[ 1020], 00:22:09.156 | 30.00th=[ 1167], 40.00th=[ 1183], 50.00th=[ 1250], 60.00th=[ 1318], 00:22:09.156 | 70.00th=[ 6477], 80.00th=[ 8926], 90.00th=[ 9329], 95.00th=[ 9463], 00:22:09.156 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:22:09.156 | 99.99th=[ 9597] 00:22:09.156 bw ( KiB/s): min= 2019, max=143360, per=2.53%, avg=63685.00, stdev=58317.74, samples=9 00:22:09.156 iops : min= 1, max= 140, avg=62.00, stdev=56.98, samples=9 00:22:09.156 lat (msec) : 1000=17.94%, 2000=48.16%, >=2000=33.91% 00:22:09.156 cpu : usr=0.03%, sys=1.04%, ctx=972, majf=0, minf=32079 00:22:09.156 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.9%, >=64=84.5% 00:22:09.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:09.156 issued rwts: total=407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: 
pid=3120999: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=77, BW=77.2MiB/s (81.0MB/s)(1096MiB/14194msec) 00:22:09.156 slat (usec): min=41, max=2160.9k, avg=10985.10, stdev=131364.68 00:22:09.156 clat (msec): min=217, max=13955, avg=1331.69, stdev=2812.97 00:22:09.156 lat (msec): min=218, max=13956, avg=1342.67, stdev=2832.99 00:22:09.156 clat percentiles (msec): 00:22:09.156 | 1.00th=[ 220], 5.00th=[ 220], 10.00th=[ 222], 20.00th=[ 224], 00:22:09.156 | 30.00th=[ 226], 40.00th=[ 228], 50.00th=[ 230], 60.00th=[ 232], 00:22:09.156 | 70.00th=[ 234], 80.00th=[ 239], 90.00th=[ 8658], 95.00th=[ 8792], 00:22:09.156 | 99.00th=[10671], 99.50th=[12818], 99.90th=[12953], 99.95th=[13892], 00:22:09.156 | 99.99th=[13892] 00:22:09.156 bw ( KiB/s): min= 1961, max=524288, per=9.85%, avg=247918.12, stdev=253500.04, samples=8 00:22:09.156 iops : min= 1, max= 512, avg=241.75, stdev=247.67, samples=8 00:22:09.156 lat (msec) : 250=83.67%, 500=1.37%, >=2000=14.96% 00:22:09.156 cpu : usr=0.02%, sys=1.06%, ctx=1014, majf=0, minf=32769 00:22:09.156 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3% 00:22:09.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.156 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.156 issued rwts: total=1096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.156 job0: (groupid=0, jobs=1): err= 0: pid=3121000: Thu May 16 20:31:21 2024 00:22:09.156 read: IOPS=1, BW=1741KiB/s (1783kB/s)(24.0MiB/14112msec) 00:22:09.156 slat (msec): min=15, max=2083, avg=497.95, stdev=849.94 00:22:09.157 clat (msec): min=2160, max=14095, avg=8803.10, stdev=3587.55 00:22:09.157 lat (msec): min=4217, max=14110, avg=9301.04, stdev=3452.25 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4329], 00:22:09.157 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:22:09.157 | 70.00th=[10671], 80.00th=[12818], 90.00th=[14026], 95.00th=[14026], 00:22:09.157 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.157 | 99.99th=[14160] 00:22:09.157 lat (msec) : >=2000=100.00% 00:22:09.157 cpu : usr=0.00%, sys=0.13%, ctx=67, majf=0, minf=6145 00:22:09.157 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:22:09.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:09.157 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.157 job0: (groupid=0, jobs=1): err= 0: pid=3121001: Thu May 16 20:31:21 2024 00:22:09.157 read: IOPS=3, BW=3174KiB/s (3250kB/s)(44.0MiB/14197msec) 00:22:09.157 slat (usec): min=753, max=4250.3k, avg=273744.75, stdev=825843.37 00:22:09.157 clat (msec): min=2151, max=14194, avg=12761.71, stdev=2943.20 00:22:09.157 lat (msec): min=4221, max=14196, avg=13035.45, stdev=2452.65 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 2165], 5.00th=[ 6409], 10.00th=[ 6409], 20.00th=[12818], 00:22:09.157 | 30.00th=[12953], 40.00th=[14026], 50.00th=[14160], 60.00th=[14160], 00:22:09.157 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.157 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.157 | 99.99th=[14160] 00:22:09.157 lat (msec) : >=2000=100.00% 00:22:09.157 
cpu : usr=0.00%, sys=0.25%, ctx=80, majf=0, minf=11265 00:22:09.157 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:22:09.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.157 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.157 job1: (groupid=0, jobs=1): err= 0: pid=3121010: Thu May 16 20:31:21 2024 00:22:09.157 read: IOPS=7, BW=7675KiB/s (7859kB/s)(106MiB/14143msec) 00:22:09.157 slat (usec): min=544, max=2079.4k, avg=113438.38, stdev=440727.41 00:22:09.157 clat (msec): min=2117, max=14141, avg=12376.06, stdev=2859.29 00:22:09.157 lat (msec): min=4154, max=14142, avg=12489.49, stdev=2681.41 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 4144], 5.00th=[ 4245], 10.00th=[ 8490], 20.00th=[10671], 00:22:09.157 | 30.00th=[13624], 40.00th=[13624], 50.00th=[13758], 60.00th=[13758], 00:22:09.157 | 70.00th=[13892], 80.00th=[13892], 90.00th=[14160], 95.00th=[14160], 00:22:09.157 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.157 | 99.99th=[14160] 00:22:09.157 lat (msec) : >=2000=100.00% 00:22:09.157 cpu : usr=0.00%, sys=0.47%, ctx=255, majf=0, minf=27137 00:22:09.157 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6% 00:22:09.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:09.157 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.157 job1: (groupid=0, jobs=1): err= 0: pid=3121011: Thu May 16 20:31:21 2024 00:22:09.157 read: IOPS=13, BW=13.3MiB/s (14.0MB/s)(188MiB/14126msec) 00:22:09.157 slat (usec): min=348, max=2079.3k, avg=63868.44, stdev=332717.68 00:22:09.157 clat (msec): min=487, max=13625, avg=9218.81, stdev=5158.92 00:22:09.157 lat (msec): min=490, max=13643, avg=9282.68, stdev=5138.81 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 489], 5.00th=[ 527], 10.00th=[ 575], 20.00th=[ 3037], 00:22:09.157 | 30.00th=[ 6342], 40.00th=[ 8557], 50.00th=[12818], 60.00th=[13355], 00:22:09.157 | 70.00th=[13489], 80.00th=[13489], 90.00th=[13489], 95.00th=[13624], 00:22:09.157 | 99.00th=[13624], 99.50th=[13624], 99.90th=[13624], 99.95th=[13624], 00:22:09.157 | 99.99th=[13624] 00:22:09.157 bw ( KiB/s): min= 2048, max=57344, per=0.71%, avg=17821.57, stdev=18284.44, samples=7 00:22:09.157 iops : min= 2, max= 56, avg=17.29, stdev=17.93, samples=7 00:22:09.157 lat (msec) : 500=3.72%, 750=11.17%, 1000=4.26%, >=2000=80.85% 00:22:09.157 cpu : usr=0.00%, sys=0.49%, ctx=368, majf=0, minf=32769 00:22:09.157 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.5%, 32=17.0%, >=64=66.5% 00:22:09.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.157 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:22:09.157 issued rwts: total=188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.157 job1: (groupid=0, jobs=1): err= 0: pid=3121012: Thu May 16 20:31:21 2024 00:22:09.157 read: IOPS=22, BW=22.4MiB/s (23.5MB/s)(316MiB/14098msec) 00:22:09.157 slat (usec): min=49, max=2081.9k, avg=37781.94, stdev=257154.15 00:22:09.157 clat (msec): min=261, max=13133, avg=5452.70, stdev=5885.47 
00:22:09.157 lat (msec): min=262, max=13135, avg=5490.48, stdev=5895.89 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 262], 5.00th=[ 264], 10.00th=[ 266], 20.00th=[ 271], 00:22:09.157 | 30.00th=[ 338], 40.00th=[ 477], 50.00th=[ 651], 60.00th=[ 6678], 00:22:09.157 | 70.00th=[12953], 80.00th=[12953], 90.00th=[13087], 95.00th=[13087], 00:22:09.157 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:22:09.157 | 99.99th=[13087] 00:22:09.157 bw ( KiB/s): min= 2048, max=180224, per=1.92%, avg=48371.25, stdev=78321.83, samples=8 00:22:09.157 iops : min= 2, max= 176, avg=47.12, stdev=76.55, samples=8 00:22:09.157 lat (msec) : 500=41.14%, 750=11.71%, 1000=1.27%, >=2000=45.89% 00:22:09.157 cpu : usr=0.00%, sys=0.57%, ctx=510, majf=0, minf=32769 00:22:09.157 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.1%, >=64=80.1% 00:22:09.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.157 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:22:09.157 issued rwts: total=316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.157 job1: (groupid=0, jobs=1): err= 0: pid=3121013: Thu May 16 20:31:21 2024 00:22:09.157 read: IOPS=3, BW=3184KiB/s (3261kB/s)(44.0MiB/14150msec) 00:22:09.157 slat (usec): min=442, max=2105.8k, avg=272567.74, stdev=674438.38 00:22:09.157 clat (msec): min=2156, max=14141, avg=11626.55, stdev=3671.76 00:22:09.157 lat (msec): min=4201, max=14149, avg=11899.11, stdev=3386.48 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[ 8490], 00:22:09.157 | 30.00th=[10671], 40.00th=[14026], 50.00th=[14026], 60.00th=[14026], 00:22:09.157 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.157 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.157 | 99.99th=[14160] 00:22:09.157 lat (msec) : >=2000=100.00% 00:22:09.157 cpu : usr=0.01%, sys=0.21%, ctx=73, majf=0, minf=11265 00:22:09.157 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:22:09.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.157 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.157 job1: (groupid=0, jobs=1): err= 0: pid=3121014: Thu May 16 20:31:21 2024 00:22:09.157 read: IOPS=3, BW=3639KiB/s (3726kB/s)(50.0MiB/14071msec) 00:22:09.157 slat (usec): min=661, max=2117.1k, avg=239353.94, stdev=638838.50 00:22:09.157 clat (msec): min=2102, max=14069, avg=9655.08, stdev=3995.89 00:22:09.157 lat (msec): min=4156, max=14070, avg=9894.43, stdev=3891.32 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:22:09.157 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:22:09.157 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:22:09.157 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:22:09.157 | 99.99th=[14026] 00:22:09.157 lat (msec) : >=2000=100.00% 00:22:09.157 cpu : usr=0.01%, sys=0.25%, ctx=70, majf=0, minf=12801 00:22:09.157 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:22:09.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:09.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.157 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.157 job1: (groupid=0, jobs=1): err= 0: pid=3121015: Thu May 16 20:31:21 2024 00:22:09.157 read: IOPS=3, BW=3929KiB/s (4023kB/s)(54.0MiB/14075msec) 00:22:09.157 slat (usec): min=696, max=2086.8k, avg=221660.14, stdev=615543.75 00:22:09.157 clat (msec): min=2104, max=14073, avg=10733.13, stdev=3665.79 00:22:09.157 lat (msec): min=4169, max=14074, avg=10954.79, stdev=3491.98 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4279], 20.00th=[ 6409], 00:22:09.157 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:22:09.157 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:22:09.157 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:22:09.157 | 99.99th=[14026] 00:22:09.157 lat (msec) : >=2000=100.00% 00:22:09.157 cpu : usr=0.00%, sys=0.30%, ctx=71, majf=0, minf=13825 00:22:09.157 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:22:09.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.157 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.157 job1: (groupid=0, jobs=1): err= 0: pid=3121016: Thu May 16 20:31:21 2024 00:22:09.157 read: IOPS=21, BW=21.6MiB/s (22.7MB/s)(306MiB/14145msec) 00:22:09.157 slat (usec): min=114, max=2091.3k, avg=39337.11, stdev=255394.03 00:22:09.157 clat (msec): min=740, max=13104, avg=5728.19, stdev=5684.27 00:22:09.157 lat (msec): min=741, max=13104, avg=5767.52, stdev=5693.33 00:22:09.157 clat percentiles (msec): 00:22:09.157 | 1.00th=[ 751], 5.00th=[ 760], 10.00th=[ 760], 20.00th=[ 768], 00:22:09.157 | 30.00th=[ 776], 40.00th=[ 785], 50.00th=[ 810], 60.00th=[ 8490], 00:22:09.157 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12953], 95.00th=[12953], 00:22:09.157 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:22:09.157 | 99.99th=[13087] 00:22:09.157 bw ( KiB/s): min= 1961, max=159744, per=1.62%, avg=40722.78, stdev=57728.97, samples=9 00:22:09.157 iops : min= 1, max= 156, avg=39.67, stdev=56.45, samples=9 00:22:09.157 lat (msec) : 750=2.61%, 1000=51.31%, >=2000=46.08% 00:22:09.157 cpu : usr=0.03%, sys=0.85%, ctx=297, majf=0, minf=32769 00:22:09.157 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.4% 00:22:09.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:22:09.158 issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job1: (groupid=0, jobs=1): err= 0: pid=3121017: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=1, BW=1236KiB/s (1265kB/s)(17.0MiB/14087msec) 00:22:09.158 slat (msec): min=16, max=4210, avg=703.72, stdev=1213.84 00:22:09.158 clat (msec): min=2122, max=14069, avg=9522.29, stdev=4293.70 00:22:09.158 lat (msec): min=4199, max=14086, avg=10226.01, stdev=3973.61 00:22:09.158 clat percentiles (msec): 00:22:09.158 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:22:09.158 | 30.00th=[ 8490], 40.00th=[ 8557], 
50.00th=[10671], 60.00th=[12818], 00:22:09.158 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:22:09.158 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:22:09.158 | 99.99th=[14026] 00:22:09.158 lat (msec) : >=2000=100.00% 00:22:09.158 cpu : usr=0.00%, sys=0.10%, ctx=60, majf=0, minf=4353 00:22:09.158 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:22:09.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:09.158 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job1: (groupid=0, jobs=1): err= 0: pid=3121018: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=4, BW=4330KiB/s (4434kB/s)(60.0MiB/14190msec) 00:22:09.158 slat (usec): min=677, max=2096.3k, avg=200967.62, stdev=589157.84 00:22:09.158 clat (msec): min=2130, max=14188, avg=11498.30, stdev=3610.82 00:22:09.158 lat (msec): min=4225, max=14189, avg=11699.27, stdev=3410.63 00:22:09.158 clat percentiles (msec): 00:22:09.158 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 6409], 00:22:09.158 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14026], 60.00th=[14026], 00:22:09.158 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.158 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.158 | 99.99th=[14160] 00:22:09.158 lat (msec) : >=2000=100.00% 00:22:09.158 cpu : usr=0.00%, sys=0.32%, ctx=104, majf=0, minf=15361 00:22:09.158 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:22:09.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.158 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job1: (groupid=0, jobs=1): err= 0: pid=3121019: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=3, BW=3406KiB/s (3487kB/s)(47.0MiB/14132msec) 00:22:09.158 slat (usec): min=748, max=2143.0k, avg=255362.15, stdev=660104.87 00:22:09.158 clat (msec): min=2129, max=14128, avg=10285.00, stdev=4019.20 00:22:09.158 lat (msec): min=4203, max=14131, avg=10540.36, stdev=3868.21 00:22:09.158 clat percentiles (msec): 00:22:09.158 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:22:09.158 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:22:09.158 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.158 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.158 | 99.99th=[14160] 00:22:09.158 lat (msec) : >=2000=100.00% 00:22:09.158 cpu : usr=0.00%, sys=0.27%, ctx=59, majf=0, minf=12033 00:22:09.158 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:22:09.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.158 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job1: (groupid=0, jobs=1): err= 0: pid=3121020: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=3, BW=3383KiB/s (3464kB/s)(47.0MiB/14227msec) 00:22:09.158 slat (usec): min=758, max=2148.4k, 
avg=212772.07, stdev=608940.20 00:22:09.158 clat (msec): min=4226, max=14225, avg=12965.01, stdev=2711.30 00:22:09.158 lat (msec): min=4279, max=14226, avg=13177.78, stdev=2383.16 00:22:09.158 clat percentiles (msec): 00:22:09.158 | 1.00th=[ 4212], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[12818], 00:22:09.158 | 30.00th=[14026], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:22:09.158 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.158 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.158 | 99.99th=[14160] 00:22:09.158 lat (msec) : >=2000=100.00% 00:22:09.158 cpu : usr=0.00%, sys=0.27%, ctx=92, majf=0, minf=12033 00:22:09.158 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:22:09.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.158 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job1: (groupid=0, jobs=1): err= 0: pid=3121021: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=114, BW=114MiB/s (120MB/s)(1620MiB/14201msec) 00:22:09.158 slat (usec): min=53, max=2103.1k, avg=6173.97, stdev=93785.07 00:22:09.158 clat (msec): min=111, max=12854, avg=885.82, stdev=2319.12 00:22:09.158 lat (msec): min=111, max=13962, avg=892.00, stdev=2335.67 00:22:09.158 clat percentiles (msec): 00:22:09.158 | 1.00th=[ 112], 5.00th=[ 112], 10.00th=[ 112], 20.00th=[ 113], 00:22:09.158 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 114], 60.00th=[ 115], 00:22:09.158 | 70.00th=[ 236], 80.00th=[ 249], 90.00th=[ 2333], 95.00th=[ 8658], 00:22:09.158 | 99.00th=[ 8792], 99.50th=[10671], 99.90th=[12818], 99.95th=[12818], 00:22:09.158 | 99.99th=[12818] 00:22:09.158 bw ( KiB/s): min=14336, max=1071104, per=17.35%, avg=436809.14, stdev=459956.84, samples=7 00:22:09.158 iops : min= 14, max= 1046, avg=426.57, stdev=449.18, samples=7 00:22:09.158 lat (msec) : 250=84.38%, 500=5.43%, >=2000=10.19% 00:22:09.158 cpu : usr=0.00%, sys=1.20%, ctx=1571, majf=0, minf=32769 00:22:09.158 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:22:09.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.158 issued rwts: total=1620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job1: (groupid=0, jobs=1): err= 0: pid=3121022: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=2, BW=2459KiB/s (2518kB/s)(34.0MiB/14161msec) 00:22:09.158 slat (msec): min=2, max=2111, avg=353.76, stdev=756.60 00:22:09.158 clat (msec): min=2132, max=14158, avg=10323.61, stdev=4080.56 00:22:09.158 lat (msec): min=4213, max=14160, avg=10677.37, stdev=3864.57 00:22:09.158 clat percentiles (msec): 00:22:09.158 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:22:09.158 | 30.00th=[ 6477], 40.00th=[10671], 50.00th=[10805], 60.00th=[14026], 00:22:09.158 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.158 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.158 | 99.99th=[14160] 00:22:09.158 lat (msec) : >=2000=100.00% 00:22:09.158 cpu : usr=0.01%, sys=0.18%, ctx=72, majf=0, minf=8705 00:22:09.158 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:22:09.158 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.158 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job2: (groupid=0, jobs=1): err= 0: pid=3121029: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=6, BW=6637KiB/s (6796kB/s)(78.0MiB/12034msec) 00:22:09.158 slat (usec): min=745, max=2079.8k, avg=128224.69, stdev=465472.84 00:22:09.158 clat (msec): min=2031, max=12033, avg=7243.75, stdev=3815.25 00:22:09.158 lat (msec): min=2036, max=12033, avg=7371.98, stdev=3805.88 00:22:09.158 clat percentiles (msec): 00:22:09.158 | 1.00th=[ 2039], 5.00th=[ 2039], 10.00th=[ 2123], 20.00th=[ 4178], 00:22:09.158 | 30.00th=[ 4245], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 8658], 00:22:09.158 | 70.00th=[10805], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:22:09.158 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:09.158 | 99.99th=[12013] 00:22:09.158 lat (msec) : >=2000=100.00% 00:22:09.158 cpu : usr=0.00%, sys=0.45%, ctx=119, majf=0, minf=19969 00:22:09.158 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2% 00:22:09.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:09.158 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job2: (groupid=0, jobs=1): err= 0: pid=3121030: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=66, BW=66.9MiB/s (70.2MB/s)(673MiB/10059msec) 00:22:09.158 slat (usec): min=40, max=2111.4k, avg=14874.26, stdev=122350.20 00:22:09.158 clat (msec): min=44, max=7196, avg=851.93, stdev=983.42 00:22:09.158 lat (msec): min=75, max=7205, avg=866.80, stdev=1015.56 00:22:09.158 clat percentiles (msec): 00:22:09.158 | 1.00th=[ 94], 5.00th=[ 326], 10.00th=[ 472], 20.00th=[ 489], 00:22:09.158 | 30.00th=[ 498], 40.00th=[ 510], 50.00th=[ 518], 60.00th=[ 542], 00:22:09.158 | 70.00th=[ 567], 80.00th=[ 911], 90.00th=[ 1787], 95.00th=[ 2089], 00:22:09.158 | 99.00th=[ 7080], 99.50th=[ 7148], 99.90th=[ 7215], 99.95th=[ 7215], 00:22:09.158 | 99.99th=[ 7215] 00:22:09.158 bw ( KiB/s): min=30720, max=262144, per=7.38%, avg=185915.50, stdev=89481.01, samples=6 00:22:09.158 iops : min= 30, max= 256, avg=181.33, stdev=87.32, samples=6 00:22:09.158 lat (msec) : 50=0.15%, 100=0.89%, 250=1.78%, 500=30.91%, 750=45.47% 00:22:09.158 lat (msec) : 1000=1.34%, 2000=12.48%, >=2000=6.98% 00:22:09.158 cpu : usr=0.00%, sys=1.45%, ctx=845, majf=0, minf=32769 00:22:09.158 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:22:09.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.158 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:09.158 issued rwts: total=673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.158 job2: (groupid=0, jobs=1): err= 0: pid=3121031: Thu May 16 20:31:21 2024 00:22:09.158 read: IOPS=2, BW=2324KiB/s (2380kB/s)(32.0MiB/14101msec) 00:22:09.158 slat (usec): min=723, max=2109.3k, avg=375114.95, stdev=771165.36 00:22:09.158 clat (msec): min=2096, max=14099, avg=10466.36, stdev=4214.03 00:22:09.158 lat (msec): min=4159, max=14100, avg=10841.48, stdev=3972.29 00:22:09.158 clat 
percentiles (msec): 00:22:09.158 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 4212], 20.00th=[ 4279], 00:22:09.158 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[13892], 00:22:09.159 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160], 00:22:09.159 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.159 | 99.99th=[14160] 00:22:09.159 lat (msec) : >=2000=100.00% 00:22:09.159 cpu : usr=0.00%, sys=0.19%, ctx=72, majf=0, minf=8193 00:22:09.159 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:09.159 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.159 job2: (groupid=0, jobs=1): err= 0: pid=3121032: Thu May 16 20:31:21 2024 00:22:09.159 read: IOPS=4, BW=4142KiB/s (4241kB/s)(57.0MiB/14093msec) 00:22:09.159 slat (usec): min=581, max=2120.8k, avg=210605.43, stdev=601855.73 00:22:09.159 clat (msec): min=2087, max=14091, avg=9196.16, stdev=3232.60 00:22:09.159 lat (msec): min=4161, max=14092, avg=9406.76, stdev=3151.25 00:22:09.159 clat percentiles (msec): 00:22:09.159 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:22:09.159 | 30.00th=[ 8423], 40.00th=[ 8423], 50.00th=[ 8557], 60.00th=[ 8557], 00:22:09.159 | 70.00th=[10671], 80.00th=[13892], 90.00th=[14026], 95.00th=[14026], 00:22:09.159 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:22:09.159 | 99.99th=[14026] 00:22:09.159 lat (msec) : >=2000=100.00% 00:22:09.159 cpu : usr=0.00%, sys=0.29%, ctx=71, majf=0, minf=14593 00:22:09.159 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.159 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.159 job2: (groupid=0, jobs=1): err= 0: pid=3121033: Thu May 16 20:31:21 2024 00:22:09.159 read: IOPS=23, BW=23.7MiB/s (24.9MB/s)(336MiB/14174msec) 00:22:09.159 slat (usec): min=57, max=4228.2k, avg=35918.62, stdev=295678.11 00:22:09.159 clat (msec): min=257, max=12705, avg=4498.91, stdev=4477.91 00:22:09.159 lat (msec): min=258, max=12707, avg=4534.83, stdev=4494.17 00:22:09.159 clat percentiles (msec): 00:22:09.159 | 1.00th=[ 259], 5.00th=[ 259], 10.00th=[ 262], 20.00th=[ 268], 00:22:09.159 | 30.00th=[ 271], 40.00th=[ 317], 50.00th=[ 2467], 60.00th=[ 3708], 00:22:09.159 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:22:09.159 | 99.00th=[10268], 99.50th=[10671], 99.90th=[12684], 99.95th=[12684], 00:22:09.159 | 99.99th=[12684] 00:22:09.159 bw ( KiB/s): min= 1815, max=299008, per=3.40%, avg=85559.80, stdev=128061.81, samples=5 00:22:09.159 iops : min= 1, max= 292, avg=83.40, stdev=125.19, samples=5 00:22:09.159 lat (msec) : 500=43.75%, 2000=0.89%, >=2000=55.36% 00:22:09.159 cpu : usr=0.01%, sys=0.73%, ctx=306, majf=0, minf=32769 00:22:09.159 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.5%, >=64=81.2% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:22:09.159 issued rwts: 
total=336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.159 job2: (groupid=0, jobs=1): err= 0: pid=3121034: Thu May 16 20:31:21 2024 00:22:09.159 read: IOPS=2, BW=2897KiB/s (2966kB/s)(40.0MiB/14141msec) 00:22:09.159 slat (usec): min=713, max=2093.1k, avg=300512.85, stdev=705371.86 00:22:09.159 clat (msec): min=2119, max=14139, avg=10402.90, stdev=4226.12 00:22:09.159 lat (msec): min=4190, max=14140, avg=10703.41, stdev=4045.54 00:22:09.159 clat percentiles (msec): 00:22:09.159 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:22:09.159 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[12818], 60.00th=[14026], 00:22:09.159 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.159 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.159 | 99.99th=[14160] 00:22:09.159 lat (msec) : >=2000=100.00% 00:22:09.159 cpu : usr=0.00%, sys=0.23%, ctx=63, majf=0, minf=10241 00:22:09.159 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.159 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.159 job2: (groupid=0, jobs=1): err= 0: pid=3121035: Thu May 16 20:31:21 2024 00:22:09.159 read: IOPS=2, BW=2448KiB/s (2506kB/s)(34.0MiB/14224msec) 00:22:09.159 slat (usec): min=680, max=2132.2k, avg=294476.43, stdev=698894.11 00:22:09.159 clat (msec): min=4210, max=14222, avg=12512.63, stdev=3088.08 00:22:09.159 lat (msec): min=4253, max=14223, avg=12807.11, stdev=2728.94 00:22:09.159 clat percentiles (msec): 00:22:09.159 | 1.00th=[ 4212], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[10671], 00:22:09.159 | 30.00th=[13892], 40.00th=[14026], 50.00th=[14160], 60.00th=[14160], 00:22:09.159 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.159 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.159 | 99.99th=[14160] 00:22:09.159 lat (msec) : >=2000=100.00% 00:22:09.159 cpu : usr=0.00%, sys=0.15%, ctx=93, majf=0, minf=8705 00:22:09.159 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.159 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.159 job2: (groupid=0, jobs=1): err= 0: pid=3121036: Thu May 16 20:31:21 2024 00:22:09.159 read: IOPS=2, BW=2518KiB/s (2579kB/s)(35.0MiB/14233msec) 00:22:09.159 slat (usec): min=547, max=2118.3k, avg=345478.84, stdev=751441.08 00:22:09.159 clat (msec): min=2140, max=14230, avg=12125.49, stdev=3503.83 00:22:09.159 lat (msec): min=4248, max=14232, avg=12470.97, stdev=3058.10 00:22:09.159 clat percentiles (msec): 00:22:09.159 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[ 8557], 00:22:09.159 | 30.00th=[12818], 40.00th=[14026], 50.00th=[14160], 60.00th=[14160], 00:22:09.159 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14295], 95.00th=[14295], 00:22:09.159 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:22:09.159 | 99.99th=[14295] 00:22:09.159 lat (msec) : >=2000=100.00% 00:22:09.159 cpu : 
usr=0.00%, sys=0.15%, ctx=92, majf=0, minf=8961 00:22:09.159 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.159 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.159 job2: (groupid=0, jobs=1): err= 0: pid=3121037: Thu May 16 20:31:21 2024 00:22:09.159 read: IOPS=1, BW=2016KiB/s (2064kB/s)(28.0MiB/14223msec) 00:22:09.159 slat (usec): min=892, max=2117.7k, avg=357225.65, stdev=755863.66 00:22:09.159 clat (msec): min=4220, max=14222, avg=11863.83, stdev=3438.51 00:22:09.159 lat (msec): min=4263, max=14222, avg=12221.06, stdev=3119.85 00:22:09.159 clat percentiles (msec): 00:22:09.159 | 1.00th=[ 4212], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 8557], 00:22:09.159 | 30.00th=[10671], 40.00th=[13892], 50.00th=[14026], 60.00th=[14026], 00:22:09.159 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.159 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.159 | 99.99th=[14160] 00:22:09.159 lat (msec) : >=2000=100.00% 00:22:09.159 cpu : usr=0.00%, sys=0.12%, ctx=82, majf=0, minf=7169 00:22:09.159 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:09.159 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.159 job2: (groupid=0, jobs=1): err= 0: pid=3121038: Thu May 16 20:31:21 2024 00:22:09.159 read: IOPS=17, BW=17.5MiB/s (18.4MB/s)(248MiB/14142msec) 00:22:09.159 slat (usec): min=198, max=2112.6k, avg=48601.07, stdev=272999.83 00:22:09.159 clat (msec): min=1358, max=7761, avg=5134.58, stdev=2556.81 00:22:09.159 lat (msec): min=1366, max=7765, avg=5183.18, stdev=2541.39 00:22:09.159 clat percentiles (msec): 00:22:09.159 | 1.00th=[ 1368], 5.00th=[ 1385], 10.00th=[ 1385], 20.00th=[ 1401], 00:22:09.159 | 30.00th=[ 1469], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 6678], 00:22:09.159 | 70.00th=[ 6879], 80.00th=[ 7148], 90.00th=[ 7483], 95.00th=[ 7617], 00:22:09.159 | 99.00th=[ 7752], 99.50th=[ 7752], 99.90th=[ 7752], 99.95th=[ 7752], 00:22:09.159 | 99.99th=[ 7752] 00:22:09.159 bw ( KiB/s): min= 1961, max=100352, per=1.64%, avg=41286.83, stdev=41067.00, samples=6 00:22:09.159 iops : min= 1, max= 98, avg=40.17, stdev=40.28, samples=6 00:22:09.159 lat (msec) : 2000=30.24%, >=2000=69.76% 00:22:09.159 cpu : usr=0.01%, sys=0.70%, ctx=402, majf=0, minf=32769 00:22:09.159 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.5%, 32=12.9%, >=64=74.6% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:22:09.159 issued rwts: total=248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.159 job2: (groupid=0, jobs=1): err= 0: pid=3121039: Thu May 16 20:31:21 2024 00:22:09.159 read: IOPS=11, BW=11.8MiB/s (12.4MB/s)(167MiB/14161msec) 00:22:09.159 slat (usec): min=79, max=2131.8k, avg=72241.72, stdev=360477.43 00:22:09.159 clat (msec): min=546, max=13848, avg=10467.69, stdev=5150.87 00:22:09.159 lat (msec): 
min=550, max=13853, avg=10539.93, stdev=5112.74 00:22:09.159 clat percentiles (msec): 00:22:09.159 | 1.00th=[ 550], 5.00th=[ 617], 10.00th=[ 676], 20.00th=[ 4245], 00:22:09.159 | 30.00th=[12818], 40.00th=[13355], 50.00th=[13489], 60.00th=[13624], 00:22:09.159 | 70.00th=[13624], 80.00th=[13758], 90.00th=[13758], 95.00th=[13758], 00:22:09.159 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:22:09.159 | 99.99th=[13892] 00:22:09.159 bw ( KiB/s): min= 1961, max=55296, per=0.46%, avg=11689.71, stdev=19390.38, samples=7 00:22:09.159 iops : min= 1, max= 54, avg=11.14, stdev=19.10, samples=7 00:22:09.159 lat (msec) : 750=16.17%, 2000=1.80%, >=2000=82.04% 00:22:09.159 cpu : usr=0.00%, sys=0.71%, ctx=188, majf=0, minf=32769 00:22:09.159 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.2%, >=64=62.3% 00:22:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.159 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:22:09.159 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.160 job2: (groupid=0, jobs=1): err= 0: pid=3121040: Thu May 16 20:31:21 2024 00:22:09.160 read: IOPS=4, BW=4347KiB/s (4451kB/s)(60.0MiB/14135msec) 00:22:09.160 slat (usec): min=362, max=2094.7k, avg=199925.85, stdev=590054.48 00:22:09.160 clat (msec): min=2138, max=14132, avg=11110.88, stdev=3670.82 00:22:09.160 lat (msec): min=4219, max=14134, avg=11310.81, stdev=3496.40 00:22:09.160 clat percentiles (msec): 00:22:09.160 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6409], 00:22:09.160 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[12818], 60.00th=[14026], 00:22:09.160 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.160 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.160 | 99.99th=[14160] 00:22:09.160 lat (msec) : >=2000=100.00% 00:22:09.160 cpu : usr=0.00%, sys=0.27%, ctx=63, majf=0, minf=15361 00:22:09.160 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:22:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.160 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.160 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.160 job2: (groupid=0, jobs=1): err= 0: pid=3121041: Thu May 16 20:31:21 2024 00:22:09.160 read: IOPS=2, BW=2899KiB/s (2968kB/s)(40.0MiB/14131msec) 00:22:09.160 slat (usec): min=760, max=2184.5k, avg=300679.53, stdev=712369.23 00:22:09.160 clat (msec): min=2102, max=14129, avg=12131.01, stdev=3248.03 00:22:09.160 lat (msec): min=4160, max=14130, avg=12431.69, stdev=2825.06 00:22:09.160 clat percentiles (msec): 00:22:09.160 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 8557], 00:22:09.160 | 30.00th=[10671], 40.00th=[13892], 50.00th=[14026], 60.00th=[14026], 00:22:09.160 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.160 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.160 | 99.99th=[14160] 00:22:09.160 lat (msec) : >=2000=100.00% 00:22:09.160 cpu : usr=0.00%, sys=0.24%, ctx=76, majf=0, minf=10241 00:22:09.160 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:22:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.160 complete : 0=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.160 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.160 job3: (groupid=0, jobs=1): err= 0: pid=3121045: Thu May 16 20:31:21 2024 00:22:09.160 read: IOPS=121, BW=121MiB/s (127MB/s)(1717MiB/14147msec) 00:22:09.160 slat (usec): min=45, max=2120.1k, avg=7003.46, stdev=88128.85 00:22:09.160 clat (msec): min=224, max=8978, avg=1001.90, stdev=2188.88 00:22:09.160 lat (msec): min=225, max=8982, avg=1008.90, stdev=2196.52 00:22:09.160 clat percentiles (msec): 00:22:09.160 | 1.00th=[ 245], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 247], 00:22:09.160 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 264], 60.00th=[ 347], 00:22:09.160 | 70.00th=[ 456], 80.00th=[ 617], 90.00th=[ 1011], 95.00th=[ 8658], 00:22:09.160 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:22:09.160 | 99.99th=[ 8926] 00:22:09.160 bw ( KiB/s): min= 2052, max=526336, per=9.95%, avg=250404.69, stdev=208964.51, samples=13 00:22:09.160 iops : min= 2, max= 514, avg=244.46, stdev=204.16, samples=13 00:22:09.160 lat (msec) : 250=32.62%, 500=38.26%, 750=13.40%, 1000=5.13%, 2000=2.91% 00:22:09.160 lat (msec) : >=2000=7.69% 00:22:09.160 cpu : usr=0.04%, sys=1.35%, ctx=1837, majf=0, minf=32769 00:22:09.160 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:22:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.160 issued rwts: total=1717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.160 job3: (groupid=0, jobs=1): err= 0: pid=3121046: Thu May 16 20:31:21 2024 00:22:09.160 read: IOPS=119, BW=119MiB/s (125MB/s)(1206MiB/10104msec) 00:22:09.160 slat (usec): min=55, max=2065.3k, avg=8296.99, stdev=60003.65 00:22:09.160 clat (msec): min=88, max=2864, avg=990.68, stdev=755.24 00:22:09.160 lat (msec): min=131, max=2875, avg=998.98, stdev=758.25 00:22:09.160 clat percentiles (msec): 00:22:09.160 | 1.00th=[ 176], 5.00th=[ 397], 10.00th=[ 489], 20.00th=[ 493], 00:22:09.160 | 30.00th=[ 518], 40.00th=[ 550], 50.00th=[ 659], 60.00th=[ 793], 00:22:09.160 | 70.00th=[ 894], 80.00th=[ 1519], 90.00th=[ 2702], 95.00th=[ 2769], 00:22:09.160 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869], 00:22:09.160 | 99.99th=[ 2869] 00:22:09.160 bw ( KiB/s): min= 4096, max=264192, per=6.26%, avg=157667.64, stdev=81005.67, samples=14 00:22:09.160 iops : min= 4, max= 258, avg=153.93, stdev=79.08, samples=14 00:22:09.160 lat (msec) : 100=0.08%, 250=1.66%, 500=20.23%, 750=34.49%, 1000=20.48% 00:22:09.160 lat (msec) : 2000=7.55%, >=2000=15.51% 00:22:09.160 cpu : usr=0.04%, sys=2.31%, ctx=1371, majf=0, minf=32769 00:22:09.160 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.8% 00:22:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.160 issued rwts: total=1206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.160 job3: (groupid=0, jobs=1): err= 0: pid=3121047: Thu May 16 20:31:21 2024 00:22:09.160 read: IOPS=90, BW=90.1MiB/s (94.4MB/s)(904MiB/10038msec) 00:22:09.160 slat (usec): min=41, max=2104.3k, avg=11057.82, stdev=105010.55 00:22:09.160 clat (msec): min=34, max=6033, 
avg=666.70, stdev=514.63 00:22:09.160 lat (msec): min=40, max=6105, avg=677.76, stdev=545.96 00:22:09.160 clat percentiles (msec): 00:22:09.160 | 1.00th=[ 104], 5.00th=[ 313], 10.00th=[ 485], 20.00th=[ 489], 00:22:09.160 | 30.00th=[ 502], 40.00th=[ 523], 50.00th=[ 542], 60.00th=[ 575], 00:22:09.160 | 70.00th=[ 735], 80.00th=[ 810], 90.00th=[ 902], 95.00th=[ 919], 00:22:09.160 | 99.00th=[ 2802], 99.50th=[ 4866], 99.90th=[ 6007], 99.95th=[ 6007], 00:22:09.160 | 99.99th=[ 6007] 00:22:09.160 bw ( KiB/s): min=126976, max=270336, per=7.91%, avg=199241.14, stdev=61504.98, samples=7 00:22:09.160 iops : min= 124, max= 264, avg=194.57, stdev=60.06, samples=7 00:22:09.160 lat (msec) : 50=0.33%, 100=0.55%, 250=2.99%, 500=25.66%, 750=43.81% 00:22:09.160 lat (msec) : 1000=24.45%, >=2000=2.21% 00:22:09.160 cpu : usr=0.07%, sys=1.80%, ctx=792, majf=0, minf=32769 00:22:09.160 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:22:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.161 issued rwts: total=904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: (groupid=0, jobs=1): err= 0: pid=3121048: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=76, BW=77.0MiB/s (80.7MB/s)(1089MiB/14151msec) 00:22:09.161 slat (usec): min=138, max=2161.6k, avg=11056.59, stdev=126925.34 00:22:09.161 clat (msec): min=237, max=11177, avg=1607.25, stdev=3224.60 00:22:09.161 lat (msec): min=239, max=11190, avg=1618.31, stdev=3236.74 00:22:09.161 clat percentiles (msec): 00:22:09.161 | 1.00th=[ 239], 5.00th=[ 241], 10.00th=[ 241], 20.00th=[ 245], 00:22:09.161 | 30.00th=[ 249], 40.00th=[ 334], 50.00th=[ 397], 60.00th=[ 451], 00:22:09.161 | 70.00th=[ 535], 80.00th=[ 558], 90.00th=[ 8490], 95.00th=[10939], 00:22:09.161 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:22:09.161 | 99.99th=[11208] 00:22:09.161 bw ( KiB/s): min= 2052, max=507904, per=7.82%, avg=196987.10, stdev=187210.26, samples=10 00:22:09.161 iops : min= 2, max= 496, avg=192.30, stdev=182.90, samples=10 00:22:09.161 lat (msec) : 250=30.67%, 500=33.24%, 750=21.95%, >=2000=14.14% 00:22:09.161 cpu : usr=0.02%, sys=1.43%, ctx=1997, majf=0, minf=32769 00:22:09.161 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:22:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.161 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.161 issued rwts: total=1089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: (groupid=0, jobs=1): err= 0: pid=3121049: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=143, BW=143MiB/s (150MB/s)(2032MiB/14163msec) 00:22:09.161 slat (usec): min=47, max=2130.2k, avg=5930.29, stdev=80992.33 00:22:09.161 clat (msec): min=122, max=8836, avg=869.66, stdev=1997.15 00:22:09.161 lat (msec): min=122, max=8838, avg=875.59, stdev=2004.16 00:22:09.161 clat percentiles (msec): 00:22:09.161 | 1.00th=[ 124], 5.00th=[ 155], 10.00th=[ 218], 20.00th=[ 247], 00:22:09.161 | 30.00th=[ 251], 40.00th=[ 266], 50.00th=[ 334], 60.00th=[ 359], 00:22:09.161 | 70.00th=[ 498], 80.00th=[ 531], 90.00th=[ 600], 95.00th=[ 8658], 00:22:09.161 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:22:09.161 | 99.99th=[ 8792] 00:22:09.161 bw ( KiB/s): min= 1961, 
max=679936, per=10.32%, avg=259947.07, stdev=204956.58, samples=15 00:22:09.161 iops : min= 1, max= 664, avg=253.73, stdev=200.22, samples=15 00:22:09.161 lat (msec) : 250=28.99%, 500=42.42%, 750=21.90%, >=2000=6.69% 00:22:09.161 cpu : usr=0.04%, sys=1.69%, ctx=1958, majf=0, minf=32769 00:22:09.161 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:22:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.161 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.161 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: (groupid=0, jobs=1): err= 0: pid=3121050: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=47, BW=47.7MiB/s (50.1MB/s)(675MiB/14140msec) 00:22:09.161 slat (usec): min=48, max=2073.6k, avg=17814.00, stdev=157952.51 00:22:09.161 clat (msec): min=227, max=10934, avg=2538.97, stdev=3778.61 00:22:09.161 lat (msec): min=228, max=10935, avg=2556.78, stdev=3790.28 00:22:09.161 clat percentiles (msec): 00:22:09.161 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 247], 20.00th=[ 251], 00:22:09.161 | 30.00th=[ 288], 40.00th=[ 651], 50.00th=[ 877], 60.00th=[ 969], 00:22:09.161 | 70.00th=[ 1250], 80.00th=[ 4530], 90.00th=[10805], 95.00th=[10939], 00:22:09.161 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:22:09.161 | 99.99th=[10939] 00:22:09.161 bw ( KiB/s): min= 1961, max=419840, per=4.46%, avg=112221.70, stdev=148777.76, samples=10 00:22:09.161 iops : min= 1, max= 410, avg=109.50, stdev=145.37, samples=10 00:22:09.161 lat (msec) : 250=19.85%, 500=16.89%, 750=6.07%, 1000=18.81%, 2000=15.85% 00:22:09.161 lat (msec) : >=2000=22.52% 00:22:09.161 cpu : usr=0.01%, sys=0.95%, ctx=811, majf=0, minf=32769 00:22:09.161 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:22:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.161 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:09.161 issued rwts: total=675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: (groupid=0, jobs=1): err= 0: pid=3121051: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=6, BW=6233KiB/s (6383kB/s)(86.0MiB/14128msec) 00:22:09.161 slat (usec): min=406, max=2049.2k, avg=139693.17, stdev=488327.75 00:22:09.161 clat (msec): min=2113, max=14125, avg=10858.89, stdev=3527.57 00:22:09.161 lat (msec): min=4153, max=14127, avg=10998.58, stdev=3413.18 00:22:09.161 clat percentiles (msec): 00:22:09.161 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 6409], 00:22:09.161 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:22:09.161 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160], 00:22:09.161 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.161 | 99.99th=[14160] 00:22:09.161 lat (msec) : >=2000=100.00% 00:22:09.161 cpu : usr=0.00%, sys=0.47%, ctx=92, majf=0, minf=22017 00:22:09.161 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:22:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:09.161 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: 
(groupid=0, jobs=1): err= 0: pid=3121052: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=2, BW=2832KiB/s (2900kB/s)(39.0MiB/14102msec) 00:22:09.161 slat (usec): min=592, max=4201.3k, avg=307402.93, stdev=866282.49 00:22:09.161 clat (msec): min=2112, max=14099, avg=11450.58, stdev=3480.09 00:22:09.161 lat (msec): min=4219, max=14101, avg=11757.99, stdev=3147.12 00:22:09.161 clat percentiles (msec): 00:22:09.161 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6409], 20.00th=[ 8490], 00:22:09.161 | 30.00th=[ 8557], 40.00th=[12818], 50.00th=[12818], 60.00th=[14026], 00:22:09.161 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160], 00:22:09.161 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.161 | 99.99th=[14160] 00:22:09.161 lat (msec) : >=2000=100.00% 00:22:09.161 cpu : usr=0.00%, sys=0.21%, ctx=65, majf=0, minf=9985 00:22:09.161 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:22:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.161 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: (groupid=0, jobs=1): err= 0: pid=3121054: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=9, BW=9293KiB/s (9516kB/s)(110MiB/12121msec) 00:22:09.161 slat (usec): min=727, max=2059.7k, avg=90949.84, stdev=395845.22 00:22:09.161 clat (msec): min=2115, max=12119, avg=8826.43, stdev=3584.07 00:22:09.161 lat (msec): min=2127, max=12120, avg=8917.38, stdev=3538.87 00:22:09.161 clat percentiles (msec): 00:22:09.161 | 1.00th=[ 2123], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:22:09.161 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[11879], 00:22:09.161 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:09.161 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:09.161 | 99.99th=[12147] 00:22:09.161 lat (msec) : >=2000=100.00% 00:22:09.161 cpu : usr=0.00%, sys=0.73%, ctx=126, majf=0, minf=28161 00:22:09.161 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.3%, 16=14.5%, 32=29.1%, >=64=42.7% 00:22:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:09.161 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: (groupid=0, jobs=1): err= 0: pid=3121055: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=31, BW=31.7MiB/s (33.2MB/s)(445MiB/14056msec) 00:22:09.161 slat (usec): min=56, max=2100.0k, avg=26830.54, stdev=218055.41 00:22:09.161 clat (msec): min=206, max=13061, avg=3929.45, stdev=5216.42 00:22:09.161 lat (msec): min=206, max=13062, avg=3956.28, stdev=5232.28 00:22:09.161 clat percentiles (msec): 00:22:09.161 | 1.00th=[ 226], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 264], 00:22:09.161 | 30.00th=[ 271], 40.00th=[ 426], 50.00th=[ 435], 60.00th=[ 447], 00:22:09.161 | 70.00th=[ 6342], 80.00th=[12818], 90.00th=[12953], 95.00th=[13087], 00:22:09.161 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:22:09.161 | 99.99th=[13087] 00:22:09.161 bw ( KiB/s): min= 2048, max=335872, per=2.87%, avg=72381.67, stdev=112525.83, samples=9 00:22:09.161 iops : min= 2, max= 328, avg=70.67, stdev=109.89, samples=9 00:22:09.161 lat (msec) : 250=11.69%, 
500=51.69%, >=2000=36.63% 00:22:09.161 cpu : usr=0.01%, sys=0.78%, ctx=430, majf=0, minf=32769 00:22:09.161 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:22:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.161 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:09.161 issued rwts: total=445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: (groupid=0, jobs=1): err= 0: pid=3121056: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=63, BW=63.9MiB/s (67.0MB/s)(772MiB/12076msec) 00:22:09.161 slat (usec): min=48, max=2060.1k, avg=15513.89, stdev=147216.05 00:22:09.161 clat (msec): min=95, max=9020, avg=1876.49, stdev=2964.31 00:22:09.161 lat (msec): min=393, max=9021, avg=1892.00, stdev=2973.15 00:22:09.161 clat percentiles (msec): 00:22:09.161 | 1.00th=[ 393], 5.00th=[ 397], 10.00th=[ 397], 20.00th=[ 401], 00:22:09.161 | 30.00th=[ 405], 40.00th=[ 409], 50.00th=[ 430], 60.00th=[ 447], 00:22:09.161 | 70.00th=[ 802], 80.00th=[ 1053], 90.00th=[ 8792], 95.00th=[ 8926], 00:22:09.161 | 99.00th=[ 8926], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:22:09.161 | 99.99th=[ 9060] 00:22:09.161 bw ( KiB/s): min= 2048, max=331776, per=5.83%, avg=146727.44, stdev=147369.12, samples=9 00:22:09.161 iops : min= 2, max= 324, avg=143.22, stdev=143.87, samples=9 00:22:09.161 lat (msec) : 100=0.13%, 500=61.40%, 750=6.22%, 1000=9.59%, 2000=3.63% 00:22:09.161 lat (msec) : >=2000=19.04% 00:22:09.161 cpu : usr=0.03%, sys=1.04%, ctx=903, majf=0, minf=32769 00:22:09.161 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.8% 00:22:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.161 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:09.161 issued rwts: total=772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.161 job3: (groupid=0, jobs=1): err= 0: pid=3121057: Thu May 16 20:31:21 2024 00:22:09.161 read: IOPS=255, BW=255MiB/s (268MB/s)(2564MiB/10039msec) 00:22:09.162 slat (usec): min=49, max=2161.6k, avg=3907.05, stdev=43639.09 00:22:09.162 clat (msec): min=13, max=4638, avg=479.28, stdev=555.48 00:22:09.162 lat (msec): min=55, max=4645, avg=483.19, stdev=560.91 00:22:09.162 clat percentiles (msec): 00:22:09.162 | 1.00th=[ 92], 5.00th=[ 105], 10.00th=[ 105], 20.00th=[ 106], 00:22:09.162 | 30.00th=[ 243], 40.00th=[ 275], 50.00th=[ 359], 60.00th=[ 435], 00:22:09.162 | 70.00th=[ 489], 80.00th=[ 550], 90.00th=[ 793], 95.00th=[ 969], 00:22:09.162 | 99.00th=[ 2702], 99.50th=[ 2735], 99.90th=[ 4597], 99.95th=[ 4665], 00:22:09.162 | 99.99th=[ 4665] 00:22:09.162 bw ( KiB/s): min=32768, max=1114112, per=12.62%, avg=317778.47, stdev=253349.87, samples=15 00:22:09.162 iops : min= 32, max= 1088, avg=310.27, stdev=247.35, samples=15 00:22:09.162 lat (msec) : 20=0.04%, 100=3.47%, 250=34.79%, 500=33.00%, 750=17.82% 00:22:09.162 lat (msec) : 1000=5.89%, >=2000=4.99% 00:22:09.162 cpu : usr=0.10%, sys=2.79%, ctx=3335, majf=0, minf=32769 00:22:09.162 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:22:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.162 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.162 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:22:09.162 job3: (groupid=0, jobs=1): err= 0: pid=3121058: Thu May 16 20:31:21 2024 00:22:09.162 read: IOPS=6, BW=6726KiB/s (6887kB/s)(93.0MiB/14159msec) 00:22:09.162 slat (usec): min=667, max=2062.5k, avg=129291.13, stdev=472836.11 00:22:09.162 clat (msec): min=2134, max=14158, avg=10873.88, stdev=3467.67 00:22:09.162 lat (msec): min=4173, max=14158, avg=11003.18, stdev=3360.79 00:22:09.162 clat percentiles (msec): 00:22:09.162 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 6409], 00:22:09.162 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:22:09.162 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160], 00:22:09.162 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.162 | 99.99th=[14160] 00:22:09.162 lat (msec) : >=2000=100.00% 00:22:09.162 cpu : usr=0.00%, sys=0.52%, ctx=106, majf=0, minf=23809 00:22:09.162 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.6%, 16=17.2%, 32=34.4%, >=64=32.3% 00:22:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:09.162 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.162 job4: (groupid=0, jobs=1): err= 0: pid=3121068: Thu May 16 20:31:21 2024 00:22:09.162 read: IOPS=2, BW=2897KiB/s (2967kB/s)(34.0MiB/12017msec) 00:22:09.162 slat (msec): min=7, max=2065, avg=350.11, stdev=740.07 00:22:09.162 clat (msec): min=112, max=12006, avg=6759.87, stdev=3654.08 00:22:09.162 lat (msec): min=2135, max=12016, avg=7109.97, stdev=3567.12 00:22:09.162 clat percentiles (msec): 00:22:09.162 | 1.00th=[ 113], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2232], 00:22:09.162 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6477], 60.00th=[ 8658], 00:22:09.162 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[11879], 95.00th=[12013], 00:22:09.162 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:09.162 | 99.99th=[12013] 00:22:09.162 lat (msec) : 250=2.94%, >=2000=97.06% 00:22:09.162 cpu : usr=0.00%, sys=0.23%, ctx=78, majf=0, minf=8705 00:22:09.162 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:22:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.162 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.162 job4: (groupid=0, jobs=1): err= 0: pid=3121069: Thu May 16 20:31:21 2024 00:22:09.162 read: IOPS=2, BW=2106KiB/s (2156kB/s)(29.0MiB/14101msec) 00:22:09.162 slat (usec): min=732, max=2108.4k, avg=413126.40, stdev=802625.95 00:22:09.162 clat (msec): min=2119, max=14099, avg=11780.30, stdev=3649.87 00:22:09.162 lat (msec): min=4198, max=14100, avg=12193.43, stdev=3162.93 00:22:09.162 clat percentiles (msec): 00:22:09.162 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8557], 00:22:09.162 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14026], 60.00th=[14026], 00:22:09.162 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:09.162 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:09.162 | 99.99th=[14160] 00:22:09.162 lat (msec) : >=2000=100.00% 00:22:09.162 cpu : usr=0.00%, sys=0.16%, ctx=72, majf=0, minf=7425 00:22:09.162 IO depths : 1=3.4%, 2=6.9%, 
4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:22:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:09.162 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.162 job4: (groupid=0, jobs=1): err= 0: pid=3121070: Thu May 16 20:31:21 2024 00:22:09.162 read: IOPS=44, BW=44.2MiB/s (46.3MB/s)(533MiB/12058msec) 00:22:09.162 slat (usec): min=374, max=2114.4k, avg=22391.36, stdev=185297.12 00:22:09.162 clat (msec): min=120, max=8647, avg=1531.62, stdev=1811.10 00:22:09.162 lat (msec): min=414, max=10761, avg=1554.02, stdev=1850.47 00:22:09.162 clat percentiles (msec): 00:22:09.162 | 1.00th=[ 414], 5.00th=[ 422], 10.00th=[ 430], 20.00th=[ 439], 00:22:09.162 | 30.00th=[ 451], 40.00th=[ 477], 50.00th=[ 514], 60.00th=[ 550], 00:22:09.162 | 70.00th=[ 592], 80.00th=[ 4463], 90.00th=[ 4597], 95.00th=[ 4732], 00:22:09.162 | 99.00th=[ 6074], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:22:09.162 | 99.99th=[ 8658] 00:22:09.162 bw ( KiB/s): min=23906, max=286720, per=6.58%, avg=165754.00, stdev=114921.97, samples=5 00:22:09.162 iops : min= 23, max= 280, avg=161.80, stdev=112.34, samples=5 00:22:09.162 lat (msec) : 250=0.19%, 500=47.65%, 750=25.14%, >=2000=27.02% 00:22:09.162 cpu : usr=0.02%, sys=0.90%, ctx=930, majf=0, minf=32769 00:22:09.162 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:22:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.162 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:09.162 issued rwts: total=533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.162 job4: (groupid=0, jobs=1): err= 0: pid=3121072: Thu May 16 20:31:21 2024 00:22:09.162 read: IOPS=6, BW=7018KiB/s (7186kB/s)(83.0MiB/12111msec) 00:22:09.162 slat (usec): min=723, max=2088.7k, avg=120516.98, stdev=452755.81 00:22:09.162 clat (msec): min=2107, max=12109, avg=10119.71, stdev=3320.66 00:22:09.162 lat (msec): min=2136, max=12110, avg=10240.23, stdev=3205.85 00:22:09.162 clat percentiles (msec): 00:22:09.162 | 1.00th=[ 2106], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6477], 00:22:09.162 | 30.00th=[10805], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:22:09.162 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:09.162 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:09.162 | 99.99th=[12147] 00:22:09.162 lat (msec) : >=2000=100.00% 00:22:09.162 cpu : usr=0.00%, sys=0.56%, ctx=131, majf=0, minf=21249 00:22:09.162 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:22:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:09.162 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.162 job4: (groupid=0, jobs=1): err= 0: pid=3121073: Thu May 16 20:31:21 2024 00:22:09.162 read: IOPS=143, BW=144MiB/s (151MB/s)(1446MiB/10064msec) 00:22:09.162 slat (usec): min=41, max=2092.6k, avg=6913.06, stdev=63439.51 00:22:09.162 clat (msec): min=61, max=3809, avg=659.99, stdev=499.77 00:22:09.162 lat (msec): min=64, max=3812, avg=666.90, stdev=506.54 00:22:09.162 clat percentiles 
(msec): 00:22:09.162 | 1.00th=[ 155], 5.00th=[ 443], 10.00th=[ 485], 20.00th=[ 493], 00:22:09.162 | 30.00th=[ 502], 40.00th=[ 510], 50.00th=[ 527], 60.00th=[ 542], 00:22:09.162 | 70.00th=[ 609], 80.00th=[ 743], 90.00th=[ 877], 95.00th=[ 902], 00:22:09.162 | 99.00th=[ 3775], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:22:09.162 | 99.99th=[ 3809] 00:22:09.162 bw ( KiB/s): min=118546, max=280576, per=8.24%, avg=207504.69, stdev=56190.57, samples=13 00:22:09.162 iops : min= 115, max= 274, avg=202.54, stdev=54.99, samples=13 00:22:09.162 lat (msec) : 100=0.28%, 250=1.59%, 500=24.41%, 750=54.70%, 1000=16.04% 00:22:09.162 lat (msec) : >=2000=2.97% 00:22:09.162 cpu : usr=0.04%, sys=1.88%, ctx=1317, majf=0, minf=32769 00:22:09.162 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:22:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.162 issued rwts: total=1446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.162 job4: (groupid=0, jobs=1): err= 0: pid=3121074: Thu May 16 20:31:21 2024 00:22:09.162 read: IOPS=36, BW=36.9MiB/s (38.7MB/s)(444MiB/12031msec) 00:22:09.162 slat (usec): min=141, max=2103.9k, avg=26819.79, stdev=200298.33 00:22:09.162 clat (msec): min=120, max=6123, avg=1970.35, stdev=1898.19 00:22:09.162 lat (msec): min=521, max=6131, avg=1997.17, stdev=1908.91 00:22:09.162 clat percentiles (msec): 00:22:09.162 | 1.00th=[ 523], 5.00th=[ 542], 10.00th=[ 558], 20.00th=[ 592], 00:22:09.162 | 30.00th=[ 634], 40.00th=[ 659], 50.00th=[ 684], 60.00th=[ 735], 00:22:09.162 | 70.00th=[ 4396], 80.00th=[ 4665], 90.00th=[ 4866], 95.00th=[ 4933], 00:22:09.162 | 99.00th=[ 5000], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:22:09.162 | 99.99th=[ 6141] 00:22:09.162 bw ( KiB/s): min= 6119, max=215040, per=5.14%, avg=129428.60, stdev=93707.30, samples=5 00:22:09.162 iops : min= 5, max= 210, avg=126.20, stdev=91.83, samples=5 00:22:09.162 lat (msec) : 250=0.23%, 750=62.16%, 1000=3.38%, >=2000=34.23% 00:22:09.162 cpu : usr=0.00%, sys=1.03%, ctx=900, majf=0, minf=32769 00:22:09.162 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:22:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.162 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:09.162 issued rwts: total=444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.162 job4: (groupid=0, jobs=1): err= 0: pid=3121075: Thu May 16 20:31:21 2024 00:22:09.162 read: IOPS=3, BW=3910KiB/s (4004kB/s)(46.0MiB/12048msec) 00:22:09.162 slat (usec): min=596, max=2059.0k, avg=259252.38, stdev=647102.94 00:22:09.162 clat (msec): min=122, max=12046, avg=7200.54, stdev=3724.42 00:22:09.162 lat (msec): min=2122, max=12047, avg=7459.79, stdev=3634.73 00:22:09.162 clat percentiles (msec): 00:22:09.162 | 1.00th=[ 123], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 2265], 00:22:09.162 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658], 00:22:09.163 | 70.00th=[10805], 80.00th=[10805], 90.00th=[12013], 95.00th=[12013], 00:22:09.163 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:09.163 | 99.99th=[12013] 00:22:09.163 lat (msec) : 250=2.17%, >=2000=97.83% 00:22:09.163 cpu : usr=0.00%, sys=0.31%, ctx=90, majf=0, minf=11777 00:22:09.163 IO depths : 1=2.2%, 
2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:22:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.163 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.163 job4: (groupid=0, jobs=1): err= 0: pid=3121076: Thu May 16 20:31:21 2024 00:22:09.163 read: IOPS=72, BW=72.7MiB/s (76.2MB/s)(879MiB/12093msec) 00:22:09.163 slat (usec): min=45, max=2060.9k, avg=13614.83, stdev=119647.10 00:22:09.163 clat (msec): min=118, max=4866, avg=1634.94, stdev=1385.86 00:22:09.163 lat (msec): min=483, max=4873, avg=1648.55, stdev=1387.29 00:22:09.163 clat percentiles (msec): 00:22:09.163 | 1.00th=[ 542], 5.00th=[ 600], 10.00th=[ 625], 20.00th=[ 625], 00:22:09.163 | 30.00th=[ 684], 40.00th=[ 776], 50.00th=[ 860], 60.00th=[ 986], 00:22:09.163 | 70.00th=[ 1636], 80.00th=[ 2769], 90.00th=[ 4463], 95.00th=[ 4597], 00:22:09.163 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:22:09.163 | 99.99th=[ 4866] 00:22:09.163 bw ( KiB/s): min= 2043, max=212992, per=5.10%, avg=128319.75, stdev=78868.30, samples=12 00:22:09.163 iops : min= 1, max= 208, avg=125.17, stdev=77.17, samples=12 00:22:09.163 lat (msec) : 250=0.11%, 500=0.34%, 750=36.18%, 1000=23.78%, 2000=10.13% 00:22:09.163 lat (msec) : >=2000=29.47% 00:22:09.163 cpu : usr=0.07%, sys=1.55%, ctx=1070, majf=0, minf=32769 00:22:09.163 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.8% 00:22:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.163 issued rwts: total=879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.163 job4: (groupid=0, jobs=1): err= 0: pid=3121077: Thu May 16 20:31:21 2024 00:22:09.163 read: IOPS=34, BW=34.3MiB/s (36.0MB/s)(483MiB/14085msec) 00:22:09.163 slat (usec): min=47, max=2086.2k, avg=24767.60, stdev=201103.14 00:22:09.163 clat (msec): min=254, max=12794, avg=2809.30, stdev=4111.32 00:22:09.163 lat (msec): min=255, max=13946, avg=2834.07, stdev=4138.41 00:22:09.163 clat percentiles (msec): 00:22:09.163 | 1.00th=[ 255], 5.00th=[ 257], 10.00th=[ 257], 20.00th=[ 257], 00:22:09.163 | 30.00th=[ 259], 40.00th=[ 259], 50.00th=[ 262], 60.00th=[ 266], 00:22:09.163 | 70.00th=[ 338], 80.00th=[ 9731], 90.00th=[ 9866], 95.00th=[ 9866], 00:22:09.163 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[12818], 99.95th=[12818], 00:22:09.163 | 99.99th=[12818] 00:22:09.163 bw ( KiB/s): min= 2048, max=399360, per=4.83%, avg=121514.67, stdev=181885.77, samples=6 00:22:09.163 iops : min= 2, max= 390, avg=118.67, stdev=177.62, samples=6 00:22:09.163 lat (msec) : 500=70.39%, 2000=0.62%, >=2000=28.99% 00:22:09.163 cpu : usr=0.01%, sys=0.60%, ctx=454, majf=0, minf=32769 00:22:09.163 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=87.0% 00:22:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:09.163 issued rwts: total=483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.163 job4: (groupid=0, jobs=1): err= 0: pid=3121078: Thu May 16 20:31:21 2024 00:22:09.163 read: IOPS=4, BW=4229KiB/s (4330kB/s)(50.0MiB/12108msec) 
00:22:09.163 slat (usec): min=752, max=2079.3k, avg=200105.66, stdev=577600.17 00:22:09.163 clat (msec): min=2101, max=12105, avg=9075.74, stdev=3673.03 00:22:09.163 lat (msec): min=2143, max=12107, avg=9275.84, stdev=3556.02 00:22:09.163 clat percentiles (msec): 00:22:09.163 | 1.00th=[ 2106], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4396], 00:22:09.163 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10805], 60.00th=[12013], 00:22:09.163 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:09.163 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:09.163 | 99.99th=[12147] 00:22:09.163 lat (msec) : >=2000=100.00% 00:22:09.163 cpu : usr=0.00%, sys=0.35%, ctx=98, majf=0, minf=12801 00:22:09.163 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:22:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.163 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.163 job4: (groupid=0, jobs=1): err= 0: pid=3121079: Thu May 16 20:31:21 2024 00:22:09.163 read: IOPS=4, BW=4139KiB/s (4238kB/s)(49.0MiB/12123msec) 00:22:09.163 slat (usec): min=699, max=2148.2k, avg=245027.99, stdev=643356.04 00:22:09.163 clat (msec): min=115, max=12119, avg=10321.43, stdev=3184.93 00:22:09.163 lat (msec): min=2157, max=12122, avg=10566.46, stdev=2824.94 00:22:09.163 clat percentiles (msec): 00:22:09.163 | 1.00th=[ 116], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 8557], 00:22:09.163 | 30.00th=[10805], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:22:09.163 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:09.163 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:09.163 | 99.99th=[12147] 00:22:09.163 lat (msec) : 250=2.04%, >=2000=97.96% 00:22:09.163 cpu : usr=0.02%, sys=0.30%, ctx=96, majf=0, minf=12545 00:22:09.163 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:22:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.163 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.163 job4: (groupid=0, jobs=1): err= 0: pid=3121080: Thu May 16 20:31:21 2024 00:22:09.163 read: IOPS=0, BW=1020KiB/s (1045kB/s)(14.0MiB/14049msec) 00:22:09.163 slat (msec): min=7, max=2130, avg=852.03, stdev=1000.70 00:22:09.163 clat (msec): min=2120, max=13954, avg=8539.76, stdev=4024.14 00:22:09.163 lat (msec): min=4198, max=14048, avg=9391.80, stdev=3817.86 00:22:09.163 clat percentiles (msec): 00:22:09.163 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:22:09.163 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[10671], 00:22:09.163 | 70.00th=[10671], 80.00th=[12818], 90.00th=[13892], 95.00th=[13892], 00:22:09.163 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:22:09.163 | 99.99th=[13892] 00:22:09.163 lat (msec) : >=2000=100.00% 00:22:09.163 cpu : usr=0.00%, sys=0.09%, ctx=64, majf=0, minf=3585 00:22:09.163 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.163 job4: (groupid=0, jobs=1): err= 0: pid=3121081: Thu May 16 20:31:21 2024 00:22:09.163 read: IOPS=154, BW=154MiB/s (162MB/s)(1863MiB/12087msec) 00:22:09.163 slat (usec): min=39, max=2135.3k, avg=6423.44, stdev=89976.90 00:22:09.163 clat (msec): min=115, max=8643, avg=771.72, stdev=1383.77 00:22:09.163 lat (msec): min=115, max=8674, avg=778.15, stdev=1393.54 00:22:09.163 clat percentiles (msec): 00:22:09.163 | 1.00th=[ 116], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 117], 00:22:09.163 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 142], 60.00th=[ 232], 00:22:09.163 | 70.00th=[ 241], 80.00th=[ 651], 90.00th=[ 3910], 95.00th=[ 4463], 00:22:09.163 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 6544], 99.95th=[ 8658], 00:22:09.163 | 99.99th=[ 8658] 00:22:09.163 bw ( KiB/s): min= 2048, max=1056768, per=14.12%, avg=355501.90, stdev=406561.50, samples=10 00:22:09.163 iops : min= 2, max= 1032, avg=347.10, stdev=397.10, samples=10 00:22:09.163 lat (msec) : 250=75.58%, 500=4.03%, 750=1.13%, 1000=1.50%, 2000=3.81% 00:22:09.163 lat (msec) : >=2000=13.96% 00:22:09.163 cpu : usr=0.02%, sys=1.70%, ctx=2025, majf=0, minf=32769 00:22:09.163 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:22:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.163 issued rwts: total=1863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.163 job5: (groupid=0, jobs=1): err= 0: pid=3121087: Thu May 16 20:31:21 2024 00:22:09.163 read: IOPS=4, BW=5010KiB/s (5130kB/s)(49.0MiB/10016msec) 00:22:09.163 slat (usec): min=1610, max=2067.8k, avg=204089.34, stdev=578401.61 00:22:09.163 clat (msec): min=14, max=10012, avg=5270.55, stdev=3837.64 00:22:09.163 lat (msec): min=17, max=10015, avg=5474.64, stdev=3818.17 00:22:09.163 clat percentiles (msec): 00:22:09.163 | 1.00th=[ 15], 5.00th=[ 21], 10.00th=[ 26], 20.00th=[ 148], 00:22:09.163 | 30.00th=[ 2265], 40.00th=[ 4396], 50.00th=[ 4463], 60.00th=[ 6611], 00:22:09.163 | 70.00th=[ 8792], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:22:09.163 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:22:09.163 | 99.99th=[10000] 00:22:09.163 lat (msec) : 20=4.08%, 50=6.12%, 100=4.08%, 250=8.16%, >=2000=77.55% 00:22:09.163 cpu : usr=0.00%, sys=0.38%, ctx=122, majf=0, minf=12545 00:22:09.163 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:22:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.163 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.163 job5: (groupid=0, jobs=1): err= 0: pid=3121088: Thu May 16 20:31:21 2024 00:22:09.163 read: IOPS=182, BW=182MiB/s (191MB/s)(1838MiB/10077msec) 00:22:09.163 slat (usec): min=48, max=82840, avg=5434.35, stdev=8664.90 00:22:09.163 clat (msec): min=74, max=1381, avg=658.94, stdev=244.27 00:22:09.163 lat (msec): min=82, max=1384, avg=664.38, stdev=245.95 00:22:09.163 clat percentiles (msec): 00:22:09.163 | 1.00th=[ 188], 5.00th=[ 451], 10.00th=[ 489], 20.00th=[ 498], 00:22:09.163 | 30.00th=[ 510], 
40.00th=[ 523], 50.00th=[ 550], 60.00th=[ 600], 00:22:09.163 | 70.00th=[ 718], 80.00th=[ 844], 90.00th=[ 1053], 95.00th=[ 1217], 00:22:09.163 | 99.00th=[ 1334], 99.50th=[ 1368], 99.90th=[ 1385], 99.95th=[ 1385], 00:22:09.163 | 99.99th=[ 1385] 00:22:09.163 bw ( KiB/s): min=69493, max=290816, per=7.73%, avg=194652.00, stdev=64481.97, samples=18 00:22:09.163 iops : min= 67, max= 284, avg=189.94, stdev=63.09, samples=18 00:22:09.163 lat (msec) : 100=0.22%, 250=1.25%, 500=19.86%, 750=52.07%, 1000=15.23% 00:22:09.163 lat (msec) : 2000=11.37% 00:22:09.164 cpu : usr=0.12%, sys=3.26%, ctx=1756, majf=0, minf=32769 00:22:09.164 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:22:09.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.164 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.164 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.164 job5: (groupid=0, jobs=1): err= 0: pid=3121089: Thu May 16 20:31:21 2024 00:22:09.164 read: IOPS=241, BW=241MiB/s (253MB/s)(2916MiB/12078msec) 00:22:09.164 slat (usec): min=46, max=2104.0k, avg=4097.82, stdev=66070.58 00:22:09.164 clat (msec): min=101, max=4611, avg=509.32, stdev=971.26 00:22:09.164 lat (msec): min=101, max=4613, avg=513.42, stdev=974.74 00:22:09.164 clat percentiles (msec): 00:22:09.164 | 1.00th=[ 124], 5.00th=[ 125], 10.00th=[ 125], 20.00th=[ 126], 00:22:09.164 | 30.00th=[ 127], 40.00th=[ 201], 50.00th=[ 220], 60.00th=[ 234], 00:22:09.164 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 743], 95.00th=[ 2467], 00:22:09.164 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:22:09.164 | 99.99th=[ 4597] 00:22:09.164 bw ( KiB/s): min= 2048, max=1044480, per=17.45%, avg=439324.38, stdev=327221.37, samples=13 00:22:09.164 iops : min= 2, max= 1020, avg=428.92, stdev=319.66, samples=13 00:22:09.164 lat (msec) : 250=79.32%, 500=5.97%, 750=4.84%, 1000=0.10%, 2000=0.51% 00:22:09.164 lat (msec) : >=2000=9.26% 00:22:09.164 cpu : usr=0.01%, sys=1.98%, ctx=2756, majf=0, minf=32769 00:22:09.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:22:09.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.164 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.164 job5: (groupid=0, jobs=1): err= 0: pid=3121090: Thu May 16 20:31:21 2024 00:22:09.164 read: IOPS=92, BW=92.6MiB/s (97.1MB/s)(1119MiB/12080msec) 00:22:09.164 slat (usec): min=373, max=2099.5k, avg=10679.13, stdev=105952.94 00:22:09.164 clat (msec): min=125, max=6794, avg=1303.45, stdev=1923.76 00:22:09.164 lat (msec): min=241, max=6796, avg=1314.13, stdev=1929.14 00:22:09.164 clat percentiles (msec): 00:22:09.164 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 288], 20.00th=[ 317], 00:22:09.164 | 30.00th=[ 351], 40.00th=[ 380], 50.00th=[ 567], 60.00th=[ 919], 00:22:09.164 | 70.00th=[ 1011], 80.00th=[ 1083], 90.00th=[ 6611], 95.00th=[ 6678], 00:22:09.164 | 99.00th=[ 6745], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:22:09.164 | 99.99th=[ 6812] 00:22:09.164 bw ( KiB/s): min= 2048, max=468992, per=6.21%, avg=156229.31, stdev=156792.78, samples=13 00:22:09.164 iops : min= 2, max= 458, avg=152.38, stdev=153.20, samples=13 00:22:09.164 lat (msec) : 250=4.47%, 500=43.88%, 
750=3.40%, 1000=16.44%, 2000=19.93% 00:22:09.164 lat (msec) : >=2000=11.89% 00:22:09.164 cpu : usr=0.02%, sys=1.49%, ctx=2676, majf=0, minf=32769 00:22:09.164 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:22:09.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.164 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.164 issued rwts: total=1119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.164 job5: (groupid=0, jobs=1): err= 0: pid=3121091: Thu May 16 20:31:21 2024 00:22:09.164 read: IOPS=4, BW=4924KiB/s (5042kB/s)(58.0MiB/12061msec) 00:22:09.164 slat (usec): min=604, max=2058.0k, avg=205786.93, stdev=583629.08 00:22:09.164 clat (msec): min=124, max=12024, avg=8407.68, stdev=3748.32 00:22:09.164 lat (msec): min=2137, max=12060, avg=8613.46, stdev=3610.72 00:22:09.164 clat percentiles (msec): 00:22:09.164 | 1.00th=[ 126], 5.00th=[ 2165], 10.00th=[ 2265], 20.00th=[ 4329], 00:22:09.164 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10805], 00:22:09.164 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:09.164 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:09.164 | 99.99th=[12013] 00:22:09.164 lat (msec) : 250=1.72%, >=2000=98.28% 00:22:09.164 cpu : usr=0.00%, sys=0.39%, ctx=98, majf=0, minf=14849 00:22:09.164 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:22:09.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.164 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.164 job5: (groupid=0, jobs=1): err= 0: pid=3121092: Thu May 16 20:31:21 2024 00:22:09.164 read: IOPS=62, BW=62.7MiB/s (65.7MB/s)(760MiB/12130msec) 00:22:09.164 slat (usec): min=650, max=2044.9k, avg=15798.96, stdev=133551.91 00:22:09.164 clat (msec): min=119, max=6978, avg=1751.94, stdev=2225.86 00:22:09.164 lat (msec): min=322, max=6982, avg=1767.74, stdev=2232.55 00:22:09.164 clat percentiles (msec): 00:22:09.164 | 1.00th=[ 321], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 393], 00:22:09.164 | 30.00th=[ 443], 40.00th=[ 481], 50.00th=[ 709], 60.00th=[ 1116], 00:22:09.164 | 70.00th=[ 1267], 80.00th=[ 2500], 90.00th=[ 6678], 95.00th=[ 6812], 00:22:09.164 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:22:09.164 | 99.99th=[ 6946] 00:22:09.164 bw ( KiB/s): min= 2043, max=356352, per=5.15%, avg=129637.90, stdev=127726.92, samples=10 00:22:09.164 iops : min= 1, max= 348, avg=126.50, stdev=124.84, samples=10 00:22:09.164 lat (msec) : 250=0.13%, 500=43.55%, 750=8.55%, 1000=3.82%, 2000=22.76% 00:22:09.164 lat (msec) : >=2000=21.18% 00:22:09.164 cpu : usr=0.04%, sys=1.16%, ctx=2325, majf=0, minf=32769 00:22:09.164 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:22:09.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.164 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:09.164 issued rwts: total=760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.164 job5: (groupid=0, jobs=1): err= 0: pid=3121093: Thu May 16 20:31:21 2024 00:22:09.164 read: IOPS=92, BW=92.3MiB/s (96.7MB/s)(924MiB/10015msec) 
00:22:09.164 slat (usec): min=613, max=2067.0k, avg=10818.28, stdev=95409.92 00:22:09.164 clat (msec): min=14, max=4851, avg=1272.03, stdev=1385.78 00:22:09.164 lat (msec): min=16, max=4854, avg=1282.85, stdev=1388.90 00:22:09.164 clat percentiles (msec): 00:22:09.164 | 1.00th=[ 32], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 388], 00:22:09.164 | 30.00th=[ 430], 40.00th=[ 575], 50.00th=[ 827], 60.00th=[ 961], 00:22:09.164 | 70.00th=[ 1150], 80.00th=[ 1418], 90.00th=[ 4597], 95.00th=[ 4732], 00:22:09.164 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:22:09.164 | 99.99th=[ 4866] 00:22:09.164 bw ( KiB/s): min=14336, max=356352, per=5.89%, avg=148333.27, stdev=117549.67, samples=11 00:22:09.164 iops : min= 14, max= 348, avg=144.73, stdev=114.77, samples=11 00:22:09.164 lat (msec) : 20=0.43%, 50=0.97%, 100=0.22%, 250=0.43%, 500=36.15% 00:22:09.164 lat (msec) : 750=6.39%, 1000=17.75%, 2000=23.05%, >=2000=14.61% 00:22:09.164 cpu : usr=0.05%, sys=1.78%, ctx=2596, majf=0, minf=32769 00:22:09.164 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:22:09.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.164 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.164 issued rwts: total=924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.164 job5: (groupid=0, jobs=1): err= 0: pid=3121094: Thu May 16 20:31:21 2024 00:22:09.164 read: IOPS=12, BW=12.4MiB/s (13.0MB/s)(124MiB/10012msec) 00:22:09.164 slat (usec): min=331, max=2143.1k, avg=80642.82, stdev=380762.27 00:22:09.164 clat (msec): min=11, max=9973, avg=984.15, stdev=2522.77 00:22:09.164 lat (msec): min=12, max=10011, avg=1064.79, stdev=2648.14 00:22:09.164 clat percentiles (msec): 00:22:09.164 | 1.00th=[ 13], 5.00th=[ 17], 10.00th=[ 23], 20.00th=[ 35], 00:22:09.164 | 30.00th=[ 46], 40.00th=[ 57], 50.00th=[ 68], 60.00th=[ 81], 00:22:09.164 | 70.00th=[ 91], 80.00th=[ 203], 90.00th=[ 4463], 95.00th=[ 8792], 00:22:09.164 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:22:09.164 | 99.99th=[10000] 00:22:09.164 lat (msec) : 20=8.06%, 50=25.81%, 100=41.94%, 250=9.68%, >=2000=14.52% 00:22:09.164 cpu : usr=0.00%, sys=0.59%, ctx=159, majf=0, minf=31745 00:22:09.164 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.5%, 16=12.9%, 32=25.8%, >=64=49.2% 00:22:09.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:09.164 issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.164 job5: (groupid=0, jobs=1): err= 0: pid=3121096: Thu May 16 20:31:21 2024 00:22:09.164 read: IOPS=12, BW=12.1MiB/s (12.7MB/s)(121MiB/10015msec) 00:22:09.164 slat (usec): min=338, max=2044.3k, avg=82649.93, stdev=373865.44 00:22:09.164 clat (msec): min=13, max=9899, avg=2679.59, stdev=3690.26 00:22:09.164 lat (msec): min=14, max=10014, avg=2762.24, stdev=3741.69 00:22:09.164 clat percentiles (msec): 00:22:09.164 | 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 25], 20.00th=[ 36], 00:22:09.164 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 72], 60.00th=[ 197], 00:22:09.164 | 70.00th=[ 4463], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8792], 00:22:09.164 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:22:09.164 | 99.99th=[ 9866] 00:22:09.164 lat (msec) : 20=6.61%, 50=25.62%, 100=26.45%, 
250=3.31%, >=2000=38.02% 00:22:09.164 cpu : usr=0.00%, sys=0.65%, ctx=148, majf=0, minf=30977 00:22:09.164 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.6%, 16=13.2%, 32=26.4%, >=64=47.9% 00:22:09.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:09.164 issued rwts: total=121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.165 job5: (groupid=0, jobs=1): err= 0: pid=3121097: Thu May 16 20:31:21 2024 00:22:09.165 read: IOPS=4, BW=4603KiB/s (4713kB/s)(54.0MiB/12013msec) 00:22:09.165 slat (usec): min=610, max=2080.5k, avg=185538.02, stdev=555432.29 00:22:09.165 clat (msec): min=1992, max=12011, avg=8858.85, stdev=3427.23 00:22:09.165 lat (msec): min=2042, max=12011, avg=9044.39, stdev=3317.97 00:22:09.165 clat percentiles (msec): 00:22:09.165 | 1.00th=[ 1989], 5.00th=[ 2089], 10.00th=[ 4178], 20.00th=[ 4279], 00:22:09.165 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:22:09.165 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:09.165 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:09.165 | 99.99th=[12013] 00:22:09.165 lat (msec) : 2000=1.85%, >=2000=98.15% 00:22:09.165 cpu : usr=0.02%, sys=0.29%, ctx=101, majf=0, minf=13825 00:22:09.165 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:22:09.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.165 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.165 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.165 job5: (groupid=0, jobs=1): err= 0: pid=3121098: Thu May 16 20:31:21 2024 00:22:09.165 read: IOPS=3, BW=4076KiB/s (4174kB/s)(48.0MiB/12059msec) 00:22:09.165 slat (usec): min=711, max=2087.9k, avg=248732.68, stdev=642102.78 00:22:09.165 clat (msec): min=119, max=12057, avg=8041.90, stdev=3937.60 00:22:09.165 lat (msec): min=2139, max=12058, avg=8290.63, stdev=3801.23 00:22:09.165 clat percentiles (msec): 00:22:09.165 | 1.00th=[ 120], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:22:09.165 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10805], 00:22:09.165 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:09.165 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:09.165 | 99.99th=[12013] 00:22:09.165 lat (msec) : 250=2.08%, >=2000=97.92% 00:22:09.165 cpu : usr=0.00%, sys=0.31%, ctx=87, majf=0, minf=12289 00:22:09.165 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:22:09.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.165 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.165 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.165 job5: (groupid=0, jobs=1): err= 0: pid=3121099: Thu May 16 20:31:21 2024 00:22:09.165 read: IOPS=91, BW=91.2MiB/s (95.7MB/s)(1104MiB/12100msec) 00:22:09.165 slat (usec): min=49, max=2019.3k, avg=10839.53, stdev=114825.50 00:22:09.165 clat (msec): min=127, max=4863, avg=1316.94, stdev=1710.07 00:22:09.165 lat (msec): min=245, max=4865, avg=1327.78, stdev=1715.67 00:22:09.165 clat percentiles (msec): 00:22:09.165 | 1.00th=[ 249], 
5.00th=[ 257], 10.00th=[ 264], 20.00th=[ 266], 00:22:09.165 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 363], 00:22:09.165 | 70.00th=[ 472], 80.00th=[ 4111], 90.00th=[ 4530], 95.00th=[ 4665], 00:22:09.165 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:22:09.165 | 99.99th=[ 4866] 00:22:09.165 bw ( KiB/s): min= 2043, max=491520, per=7.23%, avg=181911.45, stdev=198487.77, samples=11 00:22:09.165 iops : min= 1, max= 480, avg=177.55, stdev=193.93, samples=11 00:22:09.165 lat (msec) : 250=1.54%, 500=68.48%, 2000=4.17%, >=2000=25.82% 00:22:09.165 cpu : usr=0.07%, sys=1.38%, ctx=1310, majf=0, minf=32769 00:22:09.165 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:22:09.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.165 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.165 issued rwts: total=1104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.165 job5: (groupid=0, jobs=1): err= 0: pid=3121100: Thu May 16 20:31:21 2024 00:22:09.165 read: IOPS=4, BW=4302KiB/s (4405kB/s)(51.0MiB/12140msec) 00:22:09.165 slat (usec): min=742, max=2124.2k, avg=235568.64, stdev=628495.04 00:22:09.165 clat (msec): min=125, max=12138, avg=9937.84, stdev=3478.86 00:22:09.165 lat (msec): min=2166, max=12139, avg=10173.41, stdev=3196.41 00:22:09.165 clat percentiles (msec): 00:22:09.165 | 1.00th=[ 126], 5.00th=[ 2232], 10.00th=[ 4329], 20.00th=[ 8658], 00:22:09.165 | 30.00th=[10805], 40.00th=[11879], 50.00th=[12013], 60.00th=[12147], 00:22:09.165 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:09.165 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:09.165 | 99.99th=[12147] 00:22:09.165 lat (msec) : 250=1.96%, >=2000=98.04% 00:22:09.165 cpu : usr=0.00%, sys=0.35%, ctx=121, majf=0, minf=13057 00:22:09.165 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:22:09.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.165 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:09.165 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.165 00:22:09.165 Run status group 0 (all jobs): 00:22:09.165 READ: bw=2459MiB/s (2578MB/s), 1020KiB/s-255MiB/s (1045kB/s-268MB/s), io=34.2GiB (36.7GB), run=10012-14235msec 00:22:09.165 00:22:09.165 Disk stats (read/write): 00:22:09.165 nvme0n1: ios=27117/0, merge=0/0, ticks=8818037/0, in_queue=8818037, util=99.04% 00:22:09.165 nvme1n1: ios=22660/0, merge=0/0, ticks=10644755/0, in_queue=10644755, util=99.07% 00:22:09.165 nvme2n1: ios=14273/0, merge=0/0, ticks=8569758/0, in_queue=8569758, util=99.23% 00:22:09.165 nvme3n1: ios=93694/0, merge=0/0, ticks=14167049/0, in_queue=14167049, util=99.19% 00:22:09.165 nvme4n1: ios=47607/0, merge=0/0, ticks=9045646/0, in_queue=9045646, util=99.34% 00:22:09.165 nvme5n1: ios=73269/0, merge=0/0, ticks=9906264/0, in_queue=9906264, util=99.29% 00:22:09.165 20:31:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:22:09.165 20:31:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:22:09.165 20:31:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:09.165 20:31:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode0 00:22:10.159 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000000 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000000 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:10.159 20:31:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:11.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:11.095 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:22:11.095 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:22:11.095 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000001 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000001 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:11.096 20:31:23 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:12.032 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:12.032 
20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000002 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000002 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:12.032 20:31:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:12.968 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000003 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000003 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:12.968 20:31:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:13.906 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000004 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000004 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:22:13.906 
20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:13.906 20:31:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:14.868 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:14.868 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000005 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000005 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:14.869 rmmod nvme_rdma 00:22:14.869 rmmod nvme_fabrics 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 3119477 ']' 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 3119477 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@946 -- # '[' -z 3119477 ']' 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@950 -- # kill -0 3119477 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # uname 00:22:14.869 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:15.127 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3119477 00:22:15.127 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:15.127 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:15.127 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3119477' 00:22:15.127 killing process with pid 3119477 00:22:15.127 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@965 -- # kill 3119477 00:22:15.127 [2024-05-16 20:31:27.899704] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:15.127 20:31:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@970 -- # wait 3119477 00:22:15.127 [2024-05-16 20:31:27.948672] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:15.386 20:31:28 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.386 20:31:28 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:15.386 00:22:15.386 real 0m34.529s 00:22:15.386 user 2m1.102s 00:22:15.386 sys 0m13.168s 00:22:15.386 20:31:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:15.386 20:31:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:15.386 ************************************ 00:22:15.386 END TEST nvmf_srq_overwhelm 00:22:15.386 ************************************ 00:22:15.386 20:31:28 nvmf_rdma -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:22:15.386 20:31:28 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:15.386 20:31:28 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:15.386 20:31:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:15.386 ************************************ 00:22:15.386 START TEST nvmf_shutdown 00:22:15.386 ************************************ 00:22:15.386 20:31:28 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:22:15.646 * Looking for test storage... 
00:22:15.646 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.646 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:15.647 ************************************ 00:22:15.647 START TEST nvmf_shutdown_tc1 00:22:15.647 ************************************ 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:22:15.647 20:31:28 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.647 20:31:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:22.217 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:22.217 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:22.217 Found net devices under 0000:da:00.0: mlx_0_0 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:22.217 Found net devices under 0000:da:00.1: mlx_0_1 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:22.217 20:31:34 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:22.217 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- 
# cut -d/ -f1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:22.218 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:22.218 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:22.218 altname enp218s0f0np0 00:22:22.218 altname ens818f0np0 00:22:22.218 inet 192.168.100.8/24 scope global mlx_0_0 00:22:22.218 valid_lft forever preferred_lft forever 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:22.218 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:22.218 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:22.218 altname enp218s0f1np1 00:22:22.218 altname ens818f1np1 00:22:22.218 inet 192.168.100.9/24 scope global mlx_0_1 00:22:22.218 valid_lft forever preferred_lft forever 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:22.218 20:31:34 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:22.218 192.168.100.9' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:22.218 192.168.100.9' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:22.218 192.168.100.9' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3127843 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3127843 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3127843 ']' 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:22.218 20:31:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:22.218 [2024-05-16 20:31:34.486759] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:22:22.218 [2024-05-16 20:31:34.486803] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.218 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.218 [2024-05-16 20:31:34.549172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.218 [2024-05-16 20:31:34.627868] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.218 [2024-05-16 20:31:34.627907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.218 [2024-05-16 20:31:34.627913] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.218 [2024-05-16 20:31:34.627919] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:22.218 [2024-05-16 20:31:34.627924] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.218 [2024-05-16 20:31:34.628024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.218 [2024-05-16 20:31:34.628132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.218 [2024-05-16 20:31:34.628238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.218 [2024-05-16 20:31:34.628240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.478 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:22.478 [2024-05-16 20:31:35.361742] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa2dca0/0xa32190) succeed. 00:22:22.478 [2024-05-16 20:31:35.372067] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa2f2e0/0xa73820) succeed. 
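The two create_ib_device notices above confirm that the RDMA transport attached to both mlx5 ports. The same transport can be created by hand against a running nvmf_tgt using the exact options shown in this run; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and an SPDK checkout in ./spdk (both placeholders):

# Create the RDMA transport with the buffer settings used by this test,
# then list transports to confirm io_unit_size took effect.
./spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./spdk/scripts/rpc.py nvmf_get_transports

The loop that follows appears to collect one block of per-subsystem RPCs into rpcs.txt on each iteration and replay them in a single rpc_cmd call, which is what produces the Malloc1 through Malloc10 bdevs reported below.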
00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.737 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:22.737 Malloc1 00:22:22.737 [2024-05-16 20:31:35.591901] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:22:22.737 [2024-05-16 20:31:35.592301] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:22.737 Malloc2 00:22:22.737 Malloc3 00:22:22.737 Malloc4 00:22:22.996 Malloc5 00:22:22.996 Malloc6 00:22:22.996 Malloc7 00:22:22.996 Malloc8 00:22:22.996 Malloc9 00:22:22.996 Malloc10 00:22:23.255 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.255 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:23.255 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.255 20:31:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3128223 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3128223 /var/tmp/bdevperf.sock 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3128223 ']' 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
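The bdev_svc helper above receives its whole bdev configuration on /dev/fd/63, that is, through bash process substitution of gen_nvmf_target_json, and the harness then waits for the app's RPC socket to come up. Outside the common.sh helpers the same pattern looks roughly like the sketch below (run from the SPDK tree with nvmf/common.sh sourced so gen_nvmf_target_json exists); the polling loop is a simplified stand-in for waitforlisten, which additionally checks that the pid is still alive:

# start the app with a generated JSON config; nothing is written to disk
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
svc_pid=$!
# block until the RPC socket answers and framework init has finished
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init >/dev/null 2>&1; do
    sleep 0.5
done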
00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.255 { 00:22:23.255 "params": { 00:22:23.255 "name": "Nvme$subsystem", 00:22:23.255 "trtype": "$TEST_TRANSPORT", 00:22:23.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.255 "adrfam": "ipv4", 00:22:23.255 "trsvcid": "$NVMF_PORT", 00:22:23.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.255 "hdgst": ${hdgst:-false}, 00:22:23.255 "ddgst": ${ddgst:-false} 00:22:23.255 }, 00:22:23.255 "method": "bdev_nvme_attach_controller" 00:22:23.255 } 00:22:23.255 EOF 00:22:23.255 )") 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.255 { 00:22:23.255 "params": { 00:22:23.255 "name": "Nvme$subsystem", 00:22:23.255 "trtype": "$TEST_TRANSPORT", 00:22:23.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.255 "adrfam": "ipv4", 00:22:23.255 "trsvcid": "$NVMF_PORT", 00:22:23.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.255 "hdgst": ${hdgst:-false}, 00:22:23.255 "ddgst": ${ddgst:-false} 00:22:23.255 }, 00:22:23.255 "method": "bdev_nvme_attach_controller" 00:22:23.255 } 00:22:23.255 EOF 00:22:23.255 )") 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.255 { 00:22:23.255 "params": { 00:22:23.255 "name": "Nvme$subsystem", 00:22:23.255 "trtype": "$TEST_TRANSPORT", 00:22:23.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.255 "adrfam": "ipv4", 00:22:23.255 "trsvcid": "$NVMF_PORT", 00:22:23.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.255 "hdgst": ${hdgst:-false}, 00:22:23.255 "ddgst": ${ddgst:-false} 00:22:23.255 }, 00:22:23.255 "method": "bdev_nvme_attach_controller" 00:22:23.255 } 00:22:23.255 EOF 00:22:23.255 )") 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.255 { 00:22:23.255 "params": { 00:22:23.255 "name": "Nvme$subsystem", 00:22:23.255 "trtype": "$TEST_TRANSPORT", 00:22:23.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.255 "adrfam": "ipv4", 00:22:23.255 "trsvcid": 
"$NVMF_PORT", 00:22:23.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.255 "hdgst": ${hdgst:-false}, 00:22:23.255 "ddgst": ${ddgst:-false} 00:22:23.255 }, 00:22:23.255 "method": "bdev_nvme_attach_controller" 00:22:23.255 } 00:22:23.255 EOF 00:22:23.255 )") 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.255 { 00:22:23.255 "params": { 00:22:23.255 "name": "Nvme$subsystem", 00:22:23.255 "trtype": "$TEST_TRANSPORT", 00:22:23.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.255 "adrfam": "ipv4", 00:22:23.255 "trsvcid": "$NVMF_PORT", 00:22:23.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.255 "hdgst": ${hdgst:-false}, 00:22:23.255 "ddgst": ${ddgst:-false} 00:22:23.255 }, 00:22:23.255 "method": "bdev_nvme_attach_controller" 00:22:23.255 } 00:22:23.255 EOF 00:22:23.255 )") 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.255 { 00:22:23.255 "params": { 00:22:23.255 "name": "Nvme$subsystem", 00:22:23.255 "trtype": "$TEST_TRANSPORT", 00:22:23.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.255 "adrfam": "ipv4", 00:22:23.255 "trsvcid": "$NVMF_PORT", 00:22:23.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.255 "hdgst": ${hdgst:-false}, 00:22:23.255 "ddgst": ${ddgst:-false} 00:22:23.255 }, 00:22:23.255 "method": "bdev_nvme_attach_controller" 00:22:23.255 } 00:22:23.255 EOF 00:22:23.255 )") 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.255 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.255 { 00:22:23.255 "params": { 00:22:23.256 "name": "Nvme$subsystem", 00:22:23.256 "trtype": "$TEST_TRANSPORT", 00:22:23.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "$NVMF_PORT", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.256 "hdgst": ${hdgst:-false}, 00:22:23.256 "ddgst": ${ddgst:-false} 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 } 00:22:23.256 EOF 00:22:23.256 )") 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.256 [2024-05-16 20:31:36.074394] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:22:23.256 [2024-05-16 20:31:36.074448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.256 { 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme$subsystem", 00:22:23.256 "trtype": "$TEST_TRANSPORT", 00:22:23.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "$NVMF_PORT", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.256 "hdgst": ${hdgst:-false}, 00:22:23.256 "ddgst": ${ddgst:-false} 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 } 00:22:23.256 EOF 00:22:23.256 )") 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.256 { 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme$subsystem", 00:22:23.256 "trtype": "$TEST_TRANSPORT", 00:22:23.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "$NVMF_PORT", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.256 "hdgst": ${hdgst:-false}, 00:22:23.256 "ddgst": ${ddgst:-false} 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 } 00:22:23.256 EOF 00:22:23.256 )") 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.256 { 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme$subsystem", 00:22:23.256 "trtype": "$TEST_TRANSPORT", 00:22:23.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "$NVMF_PORT", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.256 "hdgst": ${hdgst:-false}, 00:22:23.256 "ddgst": ${ddgst:-false} 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 } 00:22:23.256 EOF 00:22:23.256 )") 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
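Each pass through the for-loop above appends one bdev_nvme_attach_controller stanza to the config array, and the comma-joined fragments are finally piped through jq. The resolved output printed just below amounts to a standard SPDK bdev-subsystem JSON config; the outer wrapper in this sketch is paraphrased from nvmf/common.sh and only the first of the ten controllers is shown:

cat <<'JSON' >/tmp/nvmf_bdevperf.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON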
00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:23.256 20:31:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme1", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme2", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme3", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme4", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme5", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme6", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme7", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme8", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:23.256 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme9", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 },{ 00:22:23.256 "params": { 00:22:23.256 "name": "Nvme10", 00:22:23.256 "trtype": "rdma", 00:22:23.256 "traddr": "192.168.100.8", 00:22:23.256 "adrfam": "ipv4", 00:22:23.256 "trsvcid": "4420", 00:22:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:23.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:23.256 "hdgst": false, 00:22:23.256 "ddgst": false 00:22:23.256 }, 00:22:23.256 "method": "bdev_nvme_attach_controller" 00:22:23.256 }' 00:22:23.256 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.256 [2024-05-16 20:31:36.139446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.256 [2024-05-16 20:31:36.213663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.191 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:24.192 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:22:24.192 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:24.192 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.192 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:24.192 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.192 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3128223 00:22:24.192 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:24.192 20:31:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:25.126 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3128223 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:25.126 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3127843 00:22:25.126 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:25.126 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:25.126 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:25.126 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:25.126 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.126 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.126 { 00:22:25.126 "params": { 
00:22:25.126 "name": "Nvme$subsystem", 00:22:25.126 "trtype": "$TEST_TRANSPORT", 00:22:25.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.126 "adrfam": "ipv4", 00:22:25.126 "trsvcid": "$NVMF_PORT", 00:22:25.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.127 "hdgst": ${hdgst:-false}, 00:22:25.127 "ddgst": ${ddgst:-false} 00:22:25.127 }, 00:22:25.127 "method": "bdev_nvme_attach_controller" 00:22:25.127 } 00:22:25.127 EOF 00:22:25.127 )") 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.127 { 00:22:25.127 "params": { 00:22:25.127 "name": "Nvme$subsystem", 00:22:25.127 "trtype": "$TEST_TRANSPORT", 00:22:25.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.127 "adrfam": "ipv4", 00:22:25.127 "trsvcid": "$NVMF_PORT", 00:22:25.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.127 "hdgst": ${hdgst:-false}, 00:22:25.127 "ddgst": ${ddgst:-false} 00:22:25.127 }, 00:22:25.127 "method": "bdev_nvme_attach_controller" 00:22:25.127 } 00:22:25.127 EOF 00:22:25.127 )") 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.127 { 00:22:25.127 "params": { 00:22:25.127 "name": "Nvme$subsystem", 00:22:25.127 "trtype": "$TEST_TRANSPORT", 00:22:25.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.127 "adrfam": "ipv4", 00:22:25.127 "trsvcid": "$NVMF_PORT", 00:22:25.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.127 "hdgst": ${hdgst:-false}, 00:22:25.127 "ddgst": ${ddgst:-false} 00:22:25.127 }, 00:22:25.127 "method": "bdev_nvme_attach_controller" 00:22:25.127 } 00:22:25.127 EOF 00:22:25.127 )") 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.127 { 00:22:25.127 "params": { 00:22:25.127 "name": "Nvme$subsystem", 00:22:25.127 "trtype": "$TEST_TRANSPORT", 00:22:25.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.127 "adrfam": "ipv4", 00:22:25.127 "trsvcid": "$NVMF_PORT", 00:22:25.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.127 "hdgst": ${hdgst:-false}, 00:22:25.127 "ddgst": ${ddgst:-false} 00:22:25.127 }, 00:22:25.127 "method": "bdev_nvme_attach_controller" 00:22:25.127 } 00:22:25.127 EOF 00:22:25.127 )") 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.127 { 00:22:25.127 "params": { 00:22:25.127 
"name": "Nvme$subsystem", 00:22:25.127 "trtype": "$TEST_TRANSPORT", 00:22:25.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.127 "adrfam": "ipv4", 00:22:25.127 "trsvcid": "$NVMF_PORT", 00:22:25.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.127 "hdgst": ${hdgst:-false}, 00:22:25.127 "ddgst": ${ddgst:-false} 00:22:25.127 }, 00:22:25.127 "method": "bdev_nvme_attach_controller" 00:22:25.127 } 00:22:25.127 EOF 00:22:25.127 )") 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.127 { 00:22:25.127 "params": { 00:22:25.127 "name": "Nvme$subsystem", 00:22:25.127 "trtype": "$TEST_TRANSPORT", 00:22:25.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.127 "adrfam": "ipv4", 00:22:25.127 "trsvcid": "$NVMF_PORT", 00:22:25.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.127 "hdgst": ${hdgst:-false}, 00:22:25.127 "ddgst": ${ddgst:-false} 00:22:25.127 }, 00:22:25.127 "method": "bdev_nvme_attach_controller" 00:22:25.127 } 00:22:25.127 EOF 00:22:25.127 )") 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.127 [2024-05-16 20:31:38.116348] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:22:25.127 [2024-05-16 20:31:38.116396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128531 ] 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.127 { 00:22:25.127 "params": { 00:22:25.127 "name": "Nvme$subsystem", 00:22:25.127 "trtype": "$TEST_TRANSPORT", 00:22:25.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.127 "adrfam": "ipv4", 00:22:25.127 "trsvcid": "$NVMF_PORT", 00:22:25.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.127 "hdgst": ${hdgst:-false}, 00:22:25.127 "ddgst": ${ddgst:-false} 00:22:25.127 }, 00:22:25.127 "method": "bdev_nvme_attach_controller" 00:22:25.127 } 00:22:25.127 EOF 00:22:25.127 )") 00:22:25.127 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.387 { 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme$subsystem", 00:22:25.387 "trtype": "$TEST_TRANSPORT", 00:22:25.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "$NVMF_PORT", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.387 "hdgst": ${hdgst:-false}, 00:22:25.387 "ddgst": ${ddgst:-false} 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 } 00:22:25.387 EOF 00:22:25.387 )") 
00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.387 { 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme$subsystem", 00:22:25.387 "trtype": "$TEST_TRANSPORT", 00:22:25.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "$NVMF_PORT", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.387 "hdgst": ${hdgst:-false}, 00:22:25.387 "ddgst": ${ddgst:-false} 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 } 00:22:25.387 EOF 00:22:25.387 )") 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.387 { 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme$subsystem", 00:22:25.387 "trtype": "$TEST_TRANSPORT", 00:22:25.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "$NVMF_PORT", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.387 "hdgst": ${hdgst:-false}, 00:22:25.387 "ddgst": ${ddgst:-false} 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 } 00:22:25.387 EOF 00:22:25.387 )") 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
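With the first bdev_svc instance killed off, the same ten-controller JSON is regenerated and fed straight to bdevperf. The knobs visible on the command line are queue depth 64, 64 KiB I/Os, a verify workload and a one-second run, which is what the per-Nvme*n1 table further down reports on. Reproducing the run by hand looks roughly like this (paths relative to an SPDK build tree, config generation as above):

# drive all ten NVMe-oF controllers for one second with 64 KiB verify I/O
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1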
00:22:25.387 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:25.387 20:31:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme1", 00:22:25.387 "trtype": "rdma", 00:22:25.387 "traddr": "192.168.100.8", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "4420", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.387 "hdgst": false, 00:22:25.387 "ddgst": false 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 },{ 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme2", 00:22:25.387 "trtype": "rdma", 00:22:25.387 "traddr": "192.168.100.8", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "4420", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:25.387 "hdgst": false, 00:22:25.387 "ddgst": false 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 },{ 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme3", 00:22:25.387 "trtype": "rdma", 00:22:25.387 "traddr": "192.168.100.8", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "4420", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:25.387 "hdgst": false, 00:22:25.387 "ddgst": false 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 },{ 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme4", 00:22:25.387 "trtype": "rdma", 00:22:25.387 "traddr": "192.168.100.8", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "4420", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:25.387 "hdgst": false, 00:22:25.387 "ddgst": false 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 },{ 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme5", 00:22:25.387 "trtype": "rdma", 00:22:25.387 "traddr": "192.168.100.8", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "4420", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:25.387 "hdgst": false, 00:22:25.387 "ddgst": false 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 },{ 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme6", 00:22:25.387 "trtype": "rdma", 00:22:25.387 "traddr": "192.168.100.8", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "4420", 00:22:25.387 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:25.387 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:25.387 "hdgst": false, 00:22:25.387 "ddgst": false 00:22:25.387 }, 00:22:25.387 "method": "bdev_nvme_attach_controller" 00:22:25.387 },{ 00:22:25.387 "params": { 00:22:25.387 "name": "Nvme7", 00:22:25.387 "trtype": "rdma", 00:22:25.387 "traddr": "192.168.100.8", 00:22:25.387 "adrfam": "ipv4", 00:22:25.387 "trsvcid": "4420", 00:22:25.388 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:25.388 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:25.388 "hdgst": false, 00:22:25.388 "ddgst": false 00:22:25.388 }, 00:22:25.388 "method": "bdev_nvme_attach_controller" 00:22:25.388 },{ 00:22:25.388 "params": { 00:22:25.388 "name": "Nvme8", 00:22:25.388 "trtype": "rdma", 00:22:25.388 "traddr": "192.168.100.8", 00:22:25.388 "adrfam": "ipv4", 00:22:25.388 "trsvcid": "4420", 00:22:25.388 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:22:25.388 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:25.388 "hdgst": false, 00:22:25.388 "ddgst": false 00:22:25.388 }, 00:22:25.388 "method": "bdev_nvme_attach_controller" 00:22:25.388 },{ 00:22:25.388 "params": { 00:22:25.388 "name": "Nvme9", 00:22:25.388 "trtype": "rdma", 00:22:25.388 "traddr": "192.168.100.8", 00:22:25.388 "adrfam": "ipv4", 00:22:25.388 "trsvcid": "4420", 00:22:25.388 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:25.388 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:25.388 "hdgst": false, 00:22:25.388 "ddgst": false 00:22:25.388 }, 00:22:25.388 "method": "bdev_nvme_attach_controller" 00:22:25.388 },{ 00:22:25.388 "params": { 00:22:25.388 "name": "Nvme10", 00:22:25.388 "trtype": "rdma", 00:22:25.388 "traddr": "192.168.100.8", 00:22:25.388 "adrfam": "ipv4", 00:22:25.388 "trsvcid": "4420", 00:22:25.388 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:25.388 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:25.388 "hdgst": false, 00:22:25.388 "ddgst": false 00:22:25.388 }, 00:22:25.388 "method": "bdev_nvme_attach_controller" 00:22:25.388 }' 00:22:25.388 [2024-05-16 20:31:38.180601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.388 [2024-05-16 20:31:38.254440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.325 Running I/O for 1 seconds... 00:22:27.706 00:22:27.706 Latency(us) 00:22:27.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.706 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.706 Verification LBA range: start 0x0 length 0x400 00:22:27.706 Nvme1n1 : 1.17 349.05 21.82 0.00 0.00 174240.55 9736.78 243669.09 00:22:27.706 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.706 Verification LBA range: start 0x0 length 0x400 00:22:27.706 Nvme2n1 : 1.17 354.69 22.17 0.00 0.00 169589.50 11047.50 171766.74 00:22:27.707 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.707 Verification LBA range: start 0x0 length 0x400 00:22:27.707 Nvme3n1 : 1.18 380.29 23.77 0.00 0.00 161600.37 6335.15 165774.87 00:22:27.707 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.707 Verification LBA range: start 0x0 length 0x400 00:22:27.707 Nvme4n1 : 1.18 388.39 24.27 0.00 0.00 156015.30 5929.45 153791.15 00:22:27.707 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.707 Verification LBA range: start 0x0 length 0x400 00:22:27.707 Nvme5n1 : 1.18 379.45 23.72 0.00 0.00 157718.92 7177.75 146800.64 00:22:27.707 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.707 Verification LBA range: start 0x0 length 0x400 00:22:27.707 Nvme6n1 : 1.18 379.08 23.69 0.00 0.00 155401.44 7521.04 139810.13 00:22:27.707 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.707 Verification LBA range: start 0x0 length 0x400 00:22:27.707 Nvme7n1 : 1.18 378.75 23.67 0.00 0.00 153168.18 7677.07 131820.98 00:22:27.707 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.707 Verification LBA range: start 0x0 length 0x400 00:22:27.707 Nvme8n1 : 1.18 378.38 23.65 0.00 0.00 151216.34 7864.32 127327.09 00:22:27.707 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.707 Verification LBA range: start 0x0 length 0x400 00:22:27.707 Nvme9n1 : 1.19 377.93 23.62 0.00 0.00 149486.17 8363.64 115842.68 00:22:27.707 Job: Nvme10n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.707 Verification LBA range: start 0x0 length 0x400 00:22:27.707 Nvme10n1 : 1.19 377.40 23.59 0.00 0.00 147666.16 9112.62 100363.70 00:22:27.707 =================================================================================================================== 00:22:27.707 Total : 3743.42 233.96 0.00 0.00 157366.58 5929.45 243669.09 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:27.707 rmmod nvme_rdma 00:22:27.707 rmmod nvme_fabrics 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3127843 ']' 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3127843 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3127843 ']' 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3127843 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3127843 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3127843' 00:22:27.707 killing process with pid 3127843 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@965 -- # kill 3127843 00:22:27.707 [2024-05-16 20:31:40.689568] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:27.707 20:31:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3127843 00:22:27.965 [2024-05-16 20:31:40.767177] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:28.225 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.225 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:28.225 00:22:28.225 real 0m12.688s 00:22:28.225 user 0m30.588s 00:22:28.225 sys 0m5.479s 00:22:28.225 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:28.225 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.225 ************************************ 00:22:28.225 END TEST nvmf_shutdown_tc1 00:22:28.225 ************************************ 00:22:28.225 20:31:41 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:28.225 20:31:41 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:28.225 20:31:41 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:28.225 20:31:41 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:28.486 ************************************ 00:22:28.486 START TEST nvmf_shutdown_tc2 00:22:28.486 ************************************ 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:28.486 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:28.486 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:28.486 Found net devices under 0000:da:00.0: mlx_0_0 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:28.486 Found net devices under 0000:da:00.1: mlx_0_1 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:28.486 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:28.487 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:28.487 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:28.487 altname enp218s0f0np0 00:22:28.487 altname ens818f0np0 00:22:28.487 inet 192.168.100.8/24 scope global mlx_0_0 00:22:28.487 valid_lft forever preferred_lft forever 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- 
# [[ -z 192.168.100.9 ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:28.487 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:28.487 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:28.487 altname enp218s0f1np1 00:22:28.487 altname ens818f1np1 00:22:28.487 inet 192.168.100.9/24 scope global mlx_0_1 00:22:28.487 valid_lft forever preferred_lft forever 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:28.487 20:31:41 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:28.487 192.168.100.9' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:28.487 192.168.100.9' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:28.487 192.168.100.9' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3129125 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3129125 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3129125 ']' 00:22:28.487 
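[annotation] Just before the target is started, the trace splits the newline-separated RDMA_IP_LIST into the two target addresses (nvmf/common.sh@456-458). A small sketch of that step, with the list value taken from the log; the error message is my own addition:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
  [ -z "$NVMF_FIRST_TARGET_IP" ] && echo "no RDMA-capable IP found" >&2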
20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:28.487 20:31:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:28.747 [2024-05-16 20:31:41.495969] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:22:28.747 [2024-05-16 20:31:41.496019] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.747 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.747 [2024-05-16 20:31:41.556646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.747 [2024-05-16 20:31:41.637319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.747 [2024-05-16 20:31:41.637354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.747 [2024-05-16 20:31:41.637361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.747 [2024-05-16 20:31:41.637367] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.747 [2024-05-16 20:31:41.637372] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.747 [2024-05-16 20:31:41.637476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.747 [2024-05-16 20:31:41.637582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.747 [2024-05-16 20:31:41.637690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.747 [2024-05-16 20:31:41.637691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:29.315 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:29.315 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:29.315 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.315 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.315 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:29.574 [2024-05-16 20:31:42.370309] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15e7ca0/0x15ec190) succeed. 00:22:29.574 [2024-05-16 20:31:42.380485] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15e92e0/0x162d820) succeed. 
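[annotation] The transport is created over RPC right after the target comes up (target/shutdown.sh@20); the create_ib_device notices above are the target picking up both mlx5 ports. A hedged sketch of that call, with flag values copied from the logged rpc_cmd; rpc_cmd in the harness is assumed to resolve to scripts/rpc.py against /var/tmp/spdk.sock:

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport with 1024 shared buffers and 8 KiB I/O unit size, as in the trace
  $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192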
00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.574 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.575 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:29.575 Malloc1 00:22:29.833 [2024-05-16 20:31:42.589348] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:22:29.833 [2024-05-16 20:31:42.589763] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:29.833 Malloc2 00:22:29.833 Malloc3 00:22:29.833 Malloc4 00:22:29.833 Malloc5 00:22:29.833 Malloc6 00:22:30.093 Malloc7 00:22:30.093 Malloc8 00:22:30.093 Malloc9 00:22:30.093 Malloc10 00:22:30.093 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.093 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:30.093 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.093 20:31:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3129491 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3129491 /var/tmp/bdevperf.sock 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3129491 ']' 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
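[annotation] With Malloc1-Malloc10 backing the subsystems and the RDMA listener up on 192.168.100.8:4420, the trace launches bdevperf against a JSON config produced by gen_nvmf_target_json (target/shutdown.sh@102-104); the heredoc loop that assembles that JSON follows below. A sketch of the launch, with flags copied from the log; the backgrounding and perfpid capture are inferred, not shown verbatim in the trace:

  bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
  # one bdev_nvme_attach_controller entry per subsystem, fed in over process substitution
  # (which is why the log shows --json /dev/fd/63); 64-deep queue, 64 KiB verify I/O, 10 s
  $bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) \
      -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!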
00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.093 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": "$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": "$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": "$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": 
"$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": "$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": "$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.094 [2024-05-16 20:31:43.063856] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:22:30.094 [2024-05-16 20:31:43.063905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129491 ] 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": "$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": "$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.094 { 00:22:30.094 "params": { 00:22:30.094 "name": "Nvme$subsystem", 00:22:30.094 "trtype": "$TEST_TRANSPORT", 00:22:30.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.094 "adrfam": "ipv4", 00:22:30.094 "trsvcid": "$NVMF_PORT", 00:22:30.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.094 "hdgst": ${hdgst:-false}, 00:22:30.094 "ddgst": ${ddgst:-false} 00:22:30.094 }, 00:22:30.094 "method": "bdev_nvme_attach_controller" 00:22:30.094 } 00:22:30.094 EOF 00:22:30.094 )") 00:22:30.094 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.354 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.354 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.354 { 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme$subsystem", 00:22:30.354 "trtype": "$TEST_TRANSPORT", 00:22:30.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "$NVMF_PORT", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.354 "hdgst": 
${hdgst:-false}, 00:22:30.354 "ddgst": ${ddgst:-false} 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 } 00:22:30.354 EOF 00:22:30.354 )") 00:22:30.354 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:30.354 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.354 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:30.354 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:30.354 20:31:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme1", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme2", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme3", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme4", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme5", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme6", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme7", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host7", 
00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme8", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme9", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 },{ 00:22:30.354 "params": { 00:22:30.354 "name": "Nvme10", 00:22:30.354 "trtype": "rdma", 00:22:30.354 "traddr": "192.168.100.8", 00:22:30.354 "adrfam": "ipv4", 00:22:30.354 "trsvcid": "4420", 00:22:30.354 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:30.354 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:30.354 "hdgst": false, 00:22:30.354 "ddgst": false 00:22:30.354 }, 00:22:30.354 "method": "bdev_nvme_attach_controller" 00:22:30.354 }' 00:22:30.354 [2024-05-16 20:31:43.130235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.354 [2024-05-16 20:31:43.204118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.291 Running I/O for 10 seconds... 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:31.291 20:31:44 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.291 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.550 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.550 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:31.550 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:31.550 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=153 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 153 -ge 100 ']' 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3129491 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3129491 ']' 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3129491 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:31.809 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3129491 00:22:32.068 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:32.068 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:32.068 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3129491' 00:22:32.068 killing process with pid 3129491 00:22:32.068 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3129491 00:22:32.068 20:31:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3129491 00:22:32.068 Received shutdown signal, test time was about 0.816714 seconds 00:22:32.068 00:22:32.068 Latency(us) 00:22:32.068 Device Information : runtime(s) 
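[annotation] A reconstruction of the waitforio loop whose iterations appear above (target/shutdown.sh@57-69): it polls bdevperf's iostat for Nvme1n1 until at least 100 reads have completed, which in this run happens on the second pass (3, then 153 read ops). The loop body uses only the commands visible in the trace:

  ret=1
  for (( i = 10; i != 0; i-- )); do
      read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                      | jq -r '.bdevs[0].num_read_ops')
      if [ "$read_io_count" -ge 100 ]; then
          ret=0
          break
      fi
      sleep 0.25
  done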
IOPS MiB/s Fail/s TO/s Average min max 00:22:32.068 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme1n1 : 0.80 345.78 21.61 0.00 0.00 180424.21 7895.53 206719.27 00:22:32.068 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme2n1 : 0.80 337.94 21.12 0.00 0.00 180525.75 7864.32 191739.61 00:22:32.068 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme3n1 : 0.81 357.31 22.33 0.00 0.00 167755.04 7989.15 184749.10 00:22:32.068 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme4n1 : 0.81 396.42 24.78 0.00 0.00 148185.92 5336.50 129823.70 00:22:32.068 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme5n1 : 0.81 377.23 23.58 0.00 0.00 152691.31 8738.13 163777.58 00:22:32.068 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme6n1 : 0.81 395.08 24.69 0.00 0.00 142925.63 9299.87 115343.36 00:22:32.068 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme7n1 : 0.81 394.48 24.66 0.00 0.00 139663.65 9674.36 110849.46 00:22:32.068 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme8n1 : 0.81 393.81 24.61 0.00 0.00 137102.48 10173.68 106355.57 00:22:32.068 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme9n1 : 0.81 393.00 24.56 0.00 0.00 134930.43 11047.50 95370.48 00:22:32.068 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.068 Verification LBA range: start 0x0 length 0x400 00:22:32.068 Nvme10n1 : 0.82 313.74 19.61 0.00 0.00 165070.63 8613.30 209715.20 00:22:32.068 =================================================================================================================== 00:22:32.068 Total : 3704.79 231.55 0.00 0.00 153818.58 5336.50 209715.20 00:22:32.327 20:31:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3129125 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.264 20:31:46 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:33.264 rmmod nvme_rdma 00:22:33.264 rmmod nvme_fabrics 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3129125 ']' 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3129125 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3129125 ']' 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3129125 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:33.264 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3129125 00:22:33.523 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:33.523 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:33.523 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3129125' 00:22:33.523 killing process with pid 3129125 00:22:33.523 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3129125 00:22:33.523 [2024-05-16 20:31:46.262758] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:33.523 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3129125 00:22:33.523 [2024-05-16 20:31:46.340796] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:33.781 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.781 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:33.781 00:22:33.781 real 0m5.491s 00:22:33.781 user 0m22.233s 00:22:33.781 sys 0m1.027s 00:22:33.781 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:33.781 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.781 ************************************ 00:22:33.781 END TEST nvmf_shutdown_tc2 00:22:33.781 ************************************ 00:22:33.781 
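[annotation] The tc2 teardown above ends with nvmfcleanup (nvmf/common.sh@117-125): sync, then repeatedly try to unload the RDMA initiator modules with errexit disabled so a still-busy module does not kill the run. A rough sketch under that reading; the exact retry/break condition is not visible in the trace and is assumed here:

  sync
  set +e
  for i in {1..20}; do
      # rmmod nvme_rdma / rmmod nvme_fabrics lines in the log come from these -v unloads
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
  done
  set -e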
20:31:46 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:33.781 20:31:46 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:33.781 20:31:46 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:33.781 20:31:46 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.041 ************************************ 00:22:34.041 START TEST nvmf_shutdown_tc3 00:22:34.041 ************************************ 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.041 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@297 -- # x722=() 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:34.042 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:34.042 20:31:46 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:34.042 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:34.042 Found net devices under 0000:da:00.0: mlx_0_0 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:34.042 Found net devices under 0000:da:00.1: mlx_0_1 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:34.042 20:31:46 
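[annotation] For tc3, nvmftestinit re-enumerates the mlx5 PCI functions and maps each one to its kernel netdev via sysfs (nvmf/common.sh@382-401), which is where the "Found net devices under 0000:da:00.x" lines above come from. A sketch of that mapping with the two PCI addresses taken from the log:

  for pci in 0000:da:00.0 0000:da:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keeping e.g. mlx_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done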
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:34.042 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:34.042 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:34.042 altname enp218s0f0np0 00:22:34.042 altname ens818f0np0 00:22:34.042 inet 192.168.100.8/24 scope global mlx_0_0 00:22:34.042 valid_lft forever preferred_lft forever 00:22:34.042 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:34.043 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:34.043 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:34.043 altname enp218s0f1np1 00:22:34.043 altname ens818f1np1 00:22:34.043 inet 192.168.100.9/24 scope global mlx_0_1 00:22:34.043 valid_lft forever preferred_lft forever 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:34.043 20:31:46 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:34.043 192.168.100.9' 00:22:34.043 20:31:46 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:34.043 192.168.100.9' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:34.043 192.168.100.9' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3130193 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3130193 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3130193 ']' 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.043 20:31:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.043 [2024-05-16 20:31:47.031777] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:22:34.043 [2024-05-16 20:31:47.031821] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.302 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.302 [2024-05-16 20:31:47.092779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.302 [2024-05-16 20:31:47.172482] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.302 [2024-05-16 20:31:47.172519] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.302 [2024-05-16 20:31:47.172526] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.302 [2024-05-16 20:31:47.172532] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.302 [2024-05-16 20:31:47.172537] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.302 [2024-05-16 20:31:47.172633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.302 [2024-05-16 20:31:47.172740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.302 [2024-05-16 20:31:47.172847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.302 [2024-05-16 20:31:47.172849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:34.870 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.870 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:22:34.870 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.870 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.870 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.129 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.129 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:35.129 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.129 20:31:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.129 [2024-05-16 20:31:47.903443] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfd3ca0/0xfd8190) succeed. 00:22:35.129 [2024-05-16 20:31:47.913642] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfd52e0/0x1019820) succeed. 
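By the time the two create_ib_device notices above appear, the harness has picked the target IPs off the RDMA interfaces, started nvmf_tgt, and created the RDMA transport over its RPC socket. A condensed sketch of that bring-up, paraphrased from the nvmf/common.sh and target/shutdown.sh lines traced above (waitforlisten is the harness helper that polls the RPC socket, and rpc_cmd is assumed to forward to scripts/rpc.py against /var/tmp/spdk.sock):

    # get_available_rdma_ips returned two lines in this run:
    #   192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1).
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma

    # nvmfappstart -m 0x1E: run the target on cores 1-4 (mask 0x1E) with all
    # tracepoint groups enabled, then wait for /var/tmp/spdk.sock to answer.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"

    # target/shutdown.sh@20: create the RDMA transport (1024 shared buffers,
    # 8 KiB IO unit), which is what triggers the create_ib_device notices.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192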
00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.129 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.129 Malloc1 00:22:35.388 [2024-05-16 20:31:48.123907] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:22:35.388 [2024-05-16 20:31:48.124297] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:35.388 Malloc2 00:22:35.388 Malloc3 00:22:35.388 Malloc4 00:22:35.388 Malloc5 00:22:35.388 Malloc6 00:22:35.388 Malloc7 00:22:35.647 Malloc8 00:22:35.647 Malloc9 00:22:35.647 Malloc10 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3130487 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3130487 /var/tmp/bdevperf.sock 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3130487 ']' 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.647 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
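With Malloc1 through Malloc10 exported and the target listening on 192.168.100.8:4420, the test launches a bdevperf initiator against all ten subsystems. A sketch of the launch traced at target/shutdown.sh@124-126; the JSON handed in on /dev/fd/63 is the gen_nvmf_target_json output whose assembly is traced next:

    # 64 outstanding 64 KiB verify IOs for 10 seconds across Nvme1n1..Nvme10n1.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock

Once bdevperf is up, the trace further below polls bdev_get_iostat on Nvme1n1 (shutdown.sh@59-65) until num_read_ops crosses 100, then kills the nvmf target to exercise the shutdown path.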
00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": 
"$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 [2024-05-16 20:31:48.595893] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 [2024-05-16 20:31:48.595941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130487 ] 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.648 { 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme$subsystem", 00:22:35.648 "trtype": "$TEST_TRANSPORT", 00:22:35.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.648 "adrfam": "ipv4", 00:22:35.648 "trsvcid": "$NVMF_PORT", 00:22:35.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.648 "hdgst": ${hdgst:-false}, 00:22:35.648 "ddgst": ${ddgst:-false} 00:22:35.648 }, 00:22:35.648 "method": "bdev_nvme_attach_controller" 
00:22:35.648 } 00:22:35.648 EOF 00:22:35.648 )") 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:35.648 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:35.648 20:31:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:35.648 "params": { 00:22:35.648 "name": "Nvme1", 00:22:35.648 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme2", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme3", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme4", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme5", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme6", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme7", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": 
"bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme8", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme9", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 },{ 00:22:35.649 "params": { 00:22:35.649 "name": "Nvme10", 00:22:35.649 "trtype": "rdma", 00:22:35.649 "traddr": "192.168.100.8", 00:22:35.649 "adrfam": "ipv4", 00:22:35.649 "trsvcid": "4420", 00:22:35.649 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:35.649 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:35.649 "hdgst": false, 00:22:35.649 "ddgst": false 00:22:35.649 }, 00:22:35.649 "method": "bdev_nvme_attach_controller" 00:22:35.649 }' 00:22:35.907 [2024-05-16 20:31:48.659548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.907 [2024-05-16 20:31:48.733167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.843 Running I/O for 10 seconds... 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 
-- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.843 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:37.101 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.101 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:37.101 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:37.101 20:31:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=147 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3130193 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3130193 ']' 00:22:37.357 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3130193 00:22:37.358 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:22:37.358 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:37.358 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3130193 00:22:37.358 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:37.358 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:37.358 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3130193' 00:22:37.358 killing process with pid 3130193 00:22:37.615 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3130193 00:22:37.615 [2024-05-16 20:31:50.350943] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:22:37.615 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3130193 00:22:37.615 [2024-05-16 20:31:50.463967] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:37.874 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:37.874 20:31:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:38.476 [2024-05-16 20:31:51.412192] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:22:38.476 [2024-05-16 20:31:51.413993] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:22:38.476 [2024-05-16 20:31:51.416489] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:22:38.477 [2024-05-16 20:31:51.418873] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:22:38.477 [2024-05-16 20:31:51.421454] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:22:38.477 [2024-05-16 20:31:51.423932] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:22:38.477 [2024-05-16 20:31:51.426409] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:22:38.477 [2024-05-16 20:31:51.426463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426597] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183f00 00:22:38.477 [2024-05-16 20:31:51.426762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.426789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.426815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.426842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.426869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.426895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.426922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.426949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.426975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.426991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.427003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.427019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.427029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.427061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.427074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.427090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.427100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.427116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.427127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.427143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183c00 00:22:38.477 [2024-05-16 20:31:51.427153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.427169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183800 00:22:38.477 [2024-05-16 20:31:51.427180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d44760 sqhd:e080 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.429274] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:22:38.477 [2024-05-16 20:31:51.429460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183600 00:22:38.477 [2024-05-16 20:31:51.429474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.429489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183600 00:22:38.477 [2024-05-16 20:31:51.429501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.429514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183600 00:22:38.477 [2024-05-16 20:31:51.429525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.429538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183600 00:22:38.477 [2024-05-16 20:31:51.429548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.429561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183600 00:22:38.477 [2024-05-16 20:31:51.429571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.429584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001b19fd80 len:0x10000 key:0x183600 00:22:38.477 [2024-05-16 20:31:51.429594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.429607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183600 00:22:38.477 [2024-05-16 20:31:51.429620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.477 [2024-05-16 20:31:51.429633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 
len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.429981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.429991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 
key:0x183600 00:22:38.478 [2024-05-16 20:31:51.430014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.430037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.430060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.430083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.430106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.430131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.430154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183600 00:22:38.478 [2024-05-16 20:31:51.430177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183900 
00:22:38.478 [2024-05-16 20:31:51.430222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183900 00:22:38.478 
[2024-05-16 20:31:51.430442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183900 00:22:38.478 [2024-05-16 20:31:51.430466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.478 [2024-05-16 20:31:51.430479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 
20:31:51.430652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430865] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183900 00:22:38.479 [2024-05-16 20:31:51.430911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.430936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.430949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183c00 00:22:38.479 [2024-05-16 20:31:51.430960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d2f750 sqhd:2b00 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433324] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 
00:22:38.479 [2024-05-16 20:31:51.433350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433579] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183300 00:22:38.479 [2024-05-16 20:31:51.433659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183e00 00:22:38.479 [2024-05-16 20:31:51.433684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.479 [2024-05-16 20:31:51.433696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183e00 00:22:38.479 [2024-05-16 20:31:51.433707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433789] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.433980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.433991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001b67f480 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183e00 00:22:38.480 [2024-05-16 20:31:51.434386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x183500 00:22:38.480 [2024-05-16 20:31:51.434409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 
key:0x183500 00:22:38.480 [2024-05-16 20:31:51.434439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x183500 00:22:38.480 [2024-05-16 20:31:51.434462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x183500 00:22:38.480 [2024-05-16 20:31:51.434485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x183500 00:22:38.480 [2024-05-16 20:31:51.434508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x183500 00:22:38.480 [2024-05-16 20:31:51.434531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x183500 00:22:38.480 [2024-05-16 20:31:51.434554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.480 [2024-05-16 20:31:51.434567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x183500 00:22:38.480 [2024-05-16 20:31:51.434579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 
20:31:51.434653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x183500 00:22:38.481 [2024-05-16 20:31:51.434840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.434853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183300 00:22:38.481 [2024-05-16 20:31:51.434863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:8eb0 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.437880] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:22:38.481 [2024-05-16 20:31:51.437974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.437988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.438005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.438015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.438026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.438037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.438049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.438059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.440485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.481 [2024-05-16 20:31:51.440503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:38.481 [2024-05-16 20:31:51.440513] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.481 [2024-05-16 20:31:51.440531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.440542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.440553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.440563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.440574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.440584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.440595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.440605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.442338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.481 [2024-05-16 20:31:51.442353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:38.481 [2024-05-16 20:31:51.442362] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.481 [2024-05-16 20:31:51.442379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.442390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.442401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.442411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.442429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.442440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.442454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.442464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.444436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.481 [2024-05-16 20:31:51.444452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:38.481 [2024-05-16 20:31:51.444461] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.481 [2024-05-16 20:31:51.444478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.444489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.481 [2024-05-16 20:31:51.444500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.481 [2024-05-16 20:31:51.444510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.482 [2024-05-16 20:31:51.444521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.482 [2024-05-16 20:31:51.444531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.482 [2024-05-16 20:31:51.444542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.482 [2024-05-16 20:31:51.444552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.482 [2024-05-16 20:31:51.446615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.482 [2024-05-16 20:31:51.446631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:38.482 [2024-05-16 20:31:51.446640] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.482 [2024-05-16 20:31:51.446657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.482 [2024-05-16 20:31:51.446667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.482 [2024-05-16 20:31:51.446678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.482 [2024-05-16 20:31:51.446688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.482 [2024-05-16 20:31:51.446699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.482 [2024-05-16 20:31:51.446710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.482 [2024-05-16 20:31:51.446720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.482 [2024-05-16 20:31:51.446730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.448462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.768 [2024-05-16 20:31:51.448480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:38.768 [2024-05-16 20:31:51.448493] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.768 [2024-05-16 20:31:51.448509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.768 [2024-05-16 20:31:51.448519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.448530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.768 [2024-05-16 20:31:51.448540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.448551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.768 [2024-05-16 20:31:51.448561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.448572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.768 [2024-05-16 20:31:51.448581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.450452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.768 [2024-05-16 20:31:51.450468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:38.768 [2024-05-16 20:31:51.450478] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.768 [2024-05-16 20:31:51.450494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.768 [2024-05-16 20:31:51.450504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.450515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.768 [2024-05-16 20:31:51.450524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.450535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.768 [2024-05-16 20:31:51.450545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.450556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.768 [2024-05-16 20:31:51.450566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.768 [2024-05-16 20:31:51.452478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.768 [2024-05-16 20:31:51.452493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:38.768 [2024-05-16 20:31:51.452502] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.768 [2024-05-16 20:31:51.452519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.452530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.452541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.452554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.452565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.452575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.452585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.452596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.454555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.769 [2024-05-16 20:31:51.454570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:38.769 [2024-05-16 20:31:51.454579] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.769 [2024-05-16 20:31:51.454597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.454607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.454618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.454628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.454640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.454650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.454660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.454670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.456434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.769 [2024-05-16 20:31:51.456450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:38.769 [2024-05-16 20:31:51.456458] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:38.769 [2024-05-16 20:31:51.456474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.456485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.456495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.456505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.456516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.456526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.456538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.769 [2024-05-16 20:31:51.456551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60698 cdw0:0 sqhd:6a00 p:0 m:0 dnr:0 00:22:38.769 [2024-05-16 20:31:51.477555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:38.769 [2024-05-16 20:31:51.477572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:38.769 [2024-05-16 20:31:51.477579] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:38.769 [2024-05-16 20:31:51.480219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.769 [2024-05-16 20:31:51.480239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:38.769 [2024-05-16 20:31:51.480247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:38.769 [2024-05-16 20:31:51.480254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:38.769 [2024-05-16 20:31:51.480261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:38.769 [2024-05-16 20:31:51.480268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:38.769 [2024-05-16 20:31:51.480323] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:38.769 [2024-05-16 20:31:51.480335] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:38.769 [2024-05-16 20:31:51.480344] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:38.769 [2024-05-16 20:31:51.480352] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
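The repeated ABORTED - SQ DELETION completions above are the expected side effect of this phase of nvmf_shutdown_tc3: as each controller is disconnected for reset, its outstanding admin commands (the ASYNC EVENT REQUESTs on cid 1-4) complete with status 00/08, the generic "command aborted due to SQ deletion" status, and the CQ transport error -6 is -ENXIO from the already torn-down RDMA queue pair. The bdevperf summary a few lines below reports, per namespace, the runtime, IOPS, MiB/s, Fail/s, TO/s and average/min/max latency in microseconds; since the workload uses a fixed 65536-byte IO size, MiB/s should work out to IOPS/16 on every row. A quick sanity check of one row, and a way to tally the aborted completions when triaging a saved copy of this output (build.log is a hypothetical filename, not something the job writes):
  # 134.80 IOPS at 64 KiB per IO should match the 8.43 MiB/s reported for Nvme1n1
  awk 'BEGIN { printf "%.2f MiB/s\n", 134.80 * 65536 / 1048576 }'
  # count how many admin completions were aborted by SQ deletion
  grep -c 'ABORTED - SQ DELETION' build.log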
00:22:38.769 [2024-05-16 20:31:51.481246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:38.769 [2024-05-16 20:31:51.481259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:38.769 [2024-05-16 20:31:51.481271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:38.769 [2024-05-16 20:31:51.481278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:38.769 task offset: 40960 on job bdev=Nvme6n1 fails
00:22:38.769
00:22:38.769 Latency(us)
00:22:38.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.769 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme1n1 ended in about 1.90 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme1n1 : 1.90 134.80 8.43 33.70 0.00 376242.18 45188.63 1062557.01
00:22:38.769 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme2n1 ended in about 1.90 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme2n1 : 1.90 134.75 8.42 33.69 0.00 372988.78 46187.28 1062557.01
00:22:38.769 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme3n1 ended in about 1.90 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme3n1 : 1.90 151.54 9.47 33.68 0.00 336349.38 6335.15 1062557.01
00:22:38.769 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme4n1 ended in about 1.90 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme4n1 : 1.90 151.48 9.47 33.66 0.00 333639.24 14542.75 1062557.01
00:22:38.769 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme5n1 ended in about 1.90 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme5n1 : 1.90 143.01 8.94 33.65 0.00 346598.50 18474.91 1062557.01
00:22:38.769 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme6n1 ended in about 1.90 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme6n1 : 1.90 150.32 9.39 33.64 0.00 329585.63 23967.45 1054567.86
00:22:38.769 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme7n1 ended in about 1.90 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme7n1 : 1.90 142.91 8.93 33.62 0.00 340370.73 33204.91 1054567.86
00:22:38.769 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme8n1 ended in about 1.89 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme8n1 : 1.89 146.99 9.19 33.84 0.00 326894.52 37449.14 1070546.16
00:22:38.769 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme9n1 ended in about 1.89 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme9n1 : 1.89 135.49 8.47 33.87 0.00 350787.68 61416.59 1110491.92
00:22:38.769 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.769 Job: Nvme10n1 ended in about 1.89 seconds with error
00:22:38.769 Verification LBA range: start 0x0 length 0x400
00:22:38.769 Nvme10n1 : 1.89 101.57 6.35 33.86 0.00 434634.36 61915.92 1094513.62
00:22:38.769 ===================================================================================================================
00:22:38.769 Total : 1392.85 87.05 337.20 0.00 352339.50 6335.15 1110491.92
00:22:38.769 [2024-05-16 20:31:51.513939] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:38.769 [2024-05-16 20:31:51.519293] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:38.769 [2024-05-16 20:31:51.519341] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:38.769 [2024-05-16 20:31:51.519361] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040
00:22:38.769 [2024-05-16 20:31:51.519468] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:38.769 [2024-05-16 20:31:51.519495] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:38.769 [2024-05-16 20:31:51.519512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:22:38.769 [2024-05-16 20:31:51.519609] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:38.769 [2024-05-16 20:31:51.519634] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:38.769 [2024-05-16 20:31:51.519651] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:22:38.769 [2024-05-16 20:31:51.519755] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:38.769 [2024-05-16 20:31:51.519780] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:38.769 [2024-05-16 20:31:51.519796] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:22:38.769 [2024-05-16 20:31:51.519916] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:38.770 [2024-05-16 20:31:51.519940] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:38.770 [2024-05-16 20:31:51.519957] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:22:38.770 [2024-05-16 20:31:51.520060] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:38.770 [2024-05-16 20:31:51.520092] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:38.770 [2024-05-16 20:31:51.520109] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:22:38.770 [2024-05-16 20:31:51.521285] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:38.770 [2024-05-16 20:31:51.521320]
nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:38.770 [2024-05-16 20:31:51.521328] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:22:38.770 [2024-05-16 20:31:51.521404] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:38.770 [2024-05-16 20:31:51.521416] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:38.770 [2024-05-16 20:31:51.521429] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:22:38.770 [2024-05-16 20:31:51.521517] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:38.770 [2024-05-16 20:31:51.521529] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:38.770 [2024-05-16 20:31:51.521536] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:22:38.770 [2024-05-16 20:31:51.521635] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:38.770 [2024-05-16 20:31:51.521647] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:38.770 [2024-05-16 20:31:51.521655] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3130487 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:39.029 rmmod nvme_rdma 00:22:39.029 rmmod nvme_fabrics 00:22:39.029 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 3130487 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:22:39.029 20:31:51 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:39.029 00:22:39.029 real 0m5.116s 00:22:39.029 user 0m17.591s 00:22:39.029 sys 0m1.084s 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:39.029 ************************************ 00:22:39.029 END TEST nvmf_shutdown_tc3 00:22:39.029 ************************************ 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:39.029 00:22:39.029 real 0m23.626s 00:22:39.029 user 1m10.553s 00:22:39.029 sys 0m7.796s 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:39.029 20:31:51 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:39.029 ************************************ 00:22:39.029 END TEST nvmf_shutdown 00:22:39.029 ************************************ 00:22:39.029 20:31:51 nvmf_rdma -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:39.029 20:31:51 nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.029 20:31:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:39.029 20:31:52 nvmf_rdma -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:39.029 20:31:52 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:39.029 20:31:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:39.029 20:31:52 nvmf_rdma -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:22:39.029 20:31:52 nvmf_rdma -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:39.029 20:31:52 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:39.029 20:31:52 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:39.029 20:31:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:39.289 ************************************ 00:22:39.289 START TEST nvmf_multicontroller 00:22:39.289 ************************************ 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:39.289 * Looking for test storage... 
00:22:39.289 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:22:39.289 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:22:39.289 00:22:39.289 real 0m0.117s 00:22:39.289 user 0m0.052s 00:22:39.289 sys 0m0.073s 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:39.289 20:31:52 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:39.289 ************************************ 00:22:39.289 END TEST nvmf_multicontroller 00:22:39.289 ************************************ 00:22:39.289 20:31:52 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:39.289 20:31:52 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:39.289 20:31:52 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:39.289 20:31:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:39.289 ************************************ 00:22:39.289 START TEST nvmf_aer 00:22:39.289 ************************************ 00:22:39.289 20:31:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:39.548 * Looking for test storage... 00:22:39.548 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.548 20:31:52 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.548 20:31:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:46.116 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:46.116 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:46.116 Found net devices under 0000:da:00.0: mlx_0_0 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.116 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:46.117 Found net devices under 0000:da:00.1: mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:46.117 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:46.117 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:46.117 altname enp218s0f0np0 00:22:46.117 altname ens818f0np0 00:22:46.117 inet 192.168.100.8/24 scope global mlx_0_0 00:22:46.117 valid_lft forever preferred_lft forever 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:46.117 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:46.117 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:46.117 altname enp218s0f1np1 00:22:46.117 altname ens818f1np1 00:22:46.117 inet 192.168.100.9/24 scope global mlx_0_1 00:22:46.117 valid_lft forever preferred_lft forever 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 
)) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:46.117 192.168.100.9' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:46.117 192.168.100.9' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:46.117 192.168.100.9' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:46.117 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- 
host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3134623 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3134623 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3134623 ']' 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:46.118 20:31:58 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.118 [2024-05-16 20:31:58.474706] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:22:46.118 [2024-05-16 20:31:58.474754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.118 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.118 [2024-05-16 20:31:58.537721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.118 [2024-05-16 20:31:58.615781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.118 [2024-05-16 20:31:58.615820] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.118 [2024-05-16 20:31:58.615827] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.118 [2024-05-16 20:31:58.615834] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.118 [2024-05-16 20:31:58.615838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
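The start-up notices above come from nvmf_tgt being launched for the aer host test with shared-memory id 0, all tracepoint groups enabled and a four-core mask (the -i 0 -e 0xFFFF -m 0xF invocation recorded just above); that is also why the notices point at /dev/shm/nvmf_trace.0, and the reactor messages that follow confirm all four cores came up. A minimal sketch of the same bring-up outside the test harness, run from an SPDK checkout and using scripts/rpc.py directly in place of the suite's rpc_cmd and waitforlisten helpers:
  # start the target: shm id 0, all tracepoint groups, cores 0-3
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll the RPC socket until the target answers (stand-in for waitforlisten)
  until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
  # RDMA transport with the same buffer settings the test requests
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
A snapshot of the enabled tracepoints can later be captured with the command the log itself suggests, spdk_trace -s nvmf -i 0.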
00:22:46.118 [2024-05-16 20:31:58.615892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.118 [2024-05-16 20:31:58.615916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.118 [2024-05-16 20:31:58.616007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.118 [2024-05-16 20:31:58.616008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.377 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.377 [2024-05-16 20:31:59.347843] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb099b0/0xb0dea0) succeed. 00:22:46.377 [2024-05-16 20:31:59.358148] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb0aff0/0xb4f530) succeed. 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.636 Malloc0 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.636 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.637 [2024-05-16 20:31:59.525198] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:46.637 [2024-05-16 20:31:59.525598] 
rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.637 [ 00:22:46.637 { 00:22:46.637 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:46.637 "subtype": "Discovery", 00:22:46.637 "listen_addresses": [], 00:22:46.637 "allow_any_host": true, 00:22:46.637 "hosts": [] 00:22:46.637 }, 00:22:46.637 { 00:22:46.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.637 "subtype": "NVMe", 00:22:46.637 "listen_addresses": [ 00:22:46.637 { 00:22:46.637 "trtype": "RDMA", 00:22:46.637 "adrfam": "IPv4", 00:22:46.637 "traddr": "192.168.100.8", 00:22:46.637 "trsvcid": "4420" 00:22:46.637 } 00:22:46.637 ], 00:22:46.637 "allow_any_host": true, 00:22:46.637 "hosts": [], 00:22:46.637 "serial_number": "SPDK00000000000001", 00:22:46.637 "model_number": "SPDK bdev Controller", 00:22:46.637 "max_namespaces": 2, 00:22:46.637 "min_cntlid": 1, 00:22:46.637 "max_cntlid": 65519, 00:22:46.637 "namespaces": [ 00:22:46.637 { 00:22:46.637 "nsid": 1, 00:22:46.637 "bdev_name": "Malloc0", 00:22:46.637 "name": "Malloc0", 00:22:46.637 "nguid": "7897380C28AD40269E33C7DABCAE110C", 00:22:46.637 "uuid": "7897380c-28ad-4026-9e33-c7dabcae110c" 00:22:46.637 } 00:22:46.637 ] 00:22:46.637 } 00:22:46.637 ] 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=3134865 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:22:46.637 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:22:46.637 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.896 Malloc1 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.896 [ 00:22:46.896 { 00:22:46.896 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:46.896 "subtype": "Discovery", 00:22:46.896 "listen_addresses": [], 00:22:46.896 "allow_any_host": true, 00:22:46.896 "hosts": [] 00:22:46.896 }, 00:22:46.896 { 00:22:46.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.896 "subtype": "NVMe", 00:22:46.896 "listen_addresses": [ 00:22:46.896 { 00:22:46.896 "trtype": "RDMA", 00:22:46.896 "adrfam": "IPv4", 00:22:46.896 "traddr": "192.168.100.8", 00:22:46.896 "trsvcid": "4420" 00:22:46.896 } 00:22:46.896 ], 00:22:46.896 "allow_any_host": true, 00:22:46.896 "hosts": [], 00:22:46.896 "serial_number": "SPDK00000000000001", 00:22:46.896 "model_number": "SPDK bdev Controller", 00:22:46.896 "max_namespaces": 2, 00:22:46.896 "min_cntlid": 1, 00:22:46.896 "max_cntlid": 65519, 00:22:46.896 "namespaces": [ 00:22:46.896 { 00:22:46.896 "nsid": 1, 00:22:46.896 "bdev_name": "Malloc0", 00:22:46.896 "name": "Malloc0", 00:22:46.896 "nguid": "7897380C28AD40269E33C7DABCAE110C", 00:22:46.896 "uuid": "7897380c-28ad-4026-9e33-c7dabcae110c" 00:22:46.896 }, 00:22:46.896 { 00:22:46.896 "nsid": 2, 00:22:46.896 "bdev_name": "Malloc1", 00:22:46.896 "name": "Malloc1", 00:22:46.896 "nguid": "3871220713D54869B75BCDA41CA049E6", 00:22:46.896 "uuid": "38712207-13d5-4869-b75b-cda41ca049e6" 00:22:46.896 } 00:22:46.896 ] 00:22:46.896 } 00:22:46.896 ] 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 3134865 00:22:46.896 Asynchronous Event Request test 00:22:46.896 Attaching to 192.168.100.8 00:22:46.896 Attached to 192.168.100.8 00:22:46.896 Registering asynchronous event callbacks... 00:22:46.896 Starting namespace attribute notice tests for all controllers... 00:22:46.896 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:46.896 aer_cb - Changed Namespace 00:22:46.896 Cleaning up... 
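The preceding trace reduces to a short RPC sequence: create an RDMA transport, expose a malloc bdev through a subsystem capped at two namespaces, start the aer test binary against the listener, then hot-add a second namespace so the target raises the namespace-attribute-changed AER seen in the output. A condensed sketch of that flow, assuming an SPDK checkout with the standard scripts/rpc.py client and an RDMA-capable port at 192.168.100.8; the commands and flags mirror the rpc_cmd calls above, only the relative paths are illustrative:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # start the AER listener in the background; it touches the file once it has attached and registered callbacks
  ./test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # same polling loop as waitforfile above
  # hot-adding a second namespace is what produces the 'Changed Namespace' callback in the test output
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2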
00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.896 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:47.155 rmmod nvme_rdma 00:22:47.155 rmmod nvme_fabrics 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3134623 ']' 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3134623 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3134623 ']' 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3134623 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3134623 00:22:47.155 20:31:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:47.155 20:32:00 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:47.155 20:32:00 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3134623' 00:22:47.155 killing process with pid 3134623 00:22:47.155 20:32:00 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3134623 00:22:47.155 [2024-05-16 20:32:00.002737] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
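The matching teardown, condensed from the same trace (the nvmfpid variable standing in for the target's pid is illustrative):

  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_malloc_delete Malloc1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"              # illustrative: terminate the nvmf_tgt started for this test
  modprobe -v -r nvme-rdma     # unload the initiator modules, as nvmftestfini does above
  modprobe -v -r nvme-fabrics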
00:22:47.155 20:32:00 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3134623 00:22:47.155 [2024-05-16 20:32:00.081786] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:47.414 20:32:00 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.414 20:32:00 nvmf_rdma.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:47.414 00:22:47.414 real 0m8.024s 00:22:47.414 user 0m8.152s 00:22:47.414 sys 0m5.033s 00:22:47.414 20:32:00 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:47.414 20:32:00 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:47.414 ************************************ 00:22:47.414 END TEST nvmf_aer 00:22:47.414 ************************************ 00:22:47.414 20:32:00 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:22:47.414 20:32:00 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:47.414 20:32:00 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:47.414 20:32:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:47.414 ************************************ 00:22:47.414 START TEST nvmf_async_init 00:22:47.414 ************************************ 00:22:47.414 20:32:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:22:47.414 * Looking for test storage... 00:22:47.673 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.673 20:32:00 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e30677c577d54da6a08ec47bf84dac04 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.673 20:32:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.448 20:32:06 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:54.448 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:54.448 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:54.448 Found net devices under 0000:da:00.0: mlx_0_0 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.448 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:54.449 Found net devices under 0000:da:00.1: mlx_0_1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:54.449 
20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:54.449 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:54.449 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:54.449 altname enp218s0f0np0 00:22:54.449 altname ens818f0np0 00:22:54.449 inet 192.168.100.8/24 scope global mlx_0_0 00:22:54.449 valid_lft forever preferred_lft forever 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:54.449 20:32:06 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:54.449 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:54.449 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:54.449 altname enp218s0f1np1 00:22:54.449 altname ens818f1np1 00:22:54.449 inet 192.168.100.9/24 scope global mlx_0_1 00:22:54.449 valid_lft forever preferred_lft forever 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:54.449 192.168.100.9' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:54.449 192.168.100.9' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:54.449 192.168.100.9' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3138443 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3138443 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3138443 ']' 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:54.449 20:32:06 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
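nvmfappstart amounts to launching the target and blocking until its RPC socket answers. A minimal sketch, with the nvmf_tgt invocation taken from the trace that follows; polling rpc_get_methods is one reasonable readiness check (the waitforlisten helper used here is more involved), and the rpc.py path is illustrative:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll the default /var/tmp/spdk.sock RPC socket until the target responds
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done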
00:22:54.450 20:32:06 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.450 20:32:06 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.450 20:32:06 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:54.450 [2024-05-16 20:32:06.474893] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:22:54.450 [2024-05-16 20:32:06.474939] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.450 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.450 [2024-05-16 20:32:06.534517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.450 [2024-05-16 20:32:06.612181] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.450 [2024-05-16 20:32:06.612218] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.450 [2024-05-16 20:32:06.612225] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.450 [2024-05-16 20:32:06.612231] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.450 [2024-05-16 20:32:06.612236] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.450 [2024-05-16 20:32:06.612253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.450 [2024-05-16 20:32:07.326005] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13227e0/0x1326cd0) succeed. 00:22:54.450 [2024-05-16 20:32:07.335001] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1323ce0/0x1368360) succeed. 
00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.450 null0 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e30677c577d54da6a08ec47bf84dac04 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.450 [2024-05-16 20:32:07.423381] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:54.450 [2024-05-16 20:32:07.423677] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.450 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.709 nvme0n1 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.709 [ 00:22:54.709 { 00:22:54.709 "name": "nvme0n1", 00:22:54.709 
"aliases": [ 00:22:54.709 "e30677c5-77d5-4da6-a08e-c47bf84dac04" 00:22:54.709 ], 00:22:54.709 "product_name": "NVMe disk", 00:22:54.709 "block_size": 512, 00:22:54.709 "num_blocks": 2097152, 00:22:54.709 "uuid": "e30677c5-77d5-4da6-a08e-c47bf84dac04", 00:22:54.709 "assigned_rate_limits": { 00:22:54.709 "rw_ios_per_sec": 0, 00:22:54.709 "rw_mbytes_per_sec": 0, 00:22:54.709 "r_mbytes_per_sec": 0, 00:22:54.709 "w_mbytes_per_sec": 0 00:22:54.709 }, 00:22:54.709 "claimed": false, 00:22:54.709 "zoned": false, 00:22:54.709 "supported_io_types": { 00:22:54.709 "read": true, 00:22:54.709 "write": true, 00:22:54.709 "unmap": false, 00:22:54.709 "write_zeroes": true, 00:22:54.709 "flush": true, 00:22:54.709 "reset": true, 00:22:54.709 "compare": true, 00:22:54.709 "compare_and_write": true, 00:22:54.709 "abort": true, 00:22:54.709 "nvme_admin": true, 00:22:54.709 "nvme_io": true 00:22:54.709 }, 00:22:54.709 "memory_domains": [ 00:22:54.709 { 00:22:54.709 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:54.709 "dma_device_type": 0 00:22:54.709 } 00:22:54.709 ], 00:22:54.709 "driver_specific": { 00:22:54.709 "nvme": [ 00:22:54.709 { 00:22:54.709 "trid": { 00:22:54.709 "trtype": "RDMA", 00:22:54.709 "adrfam": "IPv4", 00:22:54.709 "traddr": "192.168.100.8", 00:22:54.709 "trsvcid": "4420", 00:22:54.709 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:54.709 }, 00:22:54.709 "ctrlr_data": { 00:22:54.709 "cntlid": 1, 00:22:54.709 "vendor_id": "0x8086", 00:22:54.709 "model_number": "SPDK bdev Controller", 00:22:54.709 "serial_number": "00000000000000000000", 00:22:54.709 "firmware_revision": "24.09", 00:22:54.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:54.709 "oacs": { 00:22:54.709 "security": 0, 00:22:54.709 "format": 0, 00:22:54.709 "firmware": 0, 00:22:54.709 "ns_manage": 0 00:22:54.709 }, 00:22:54.709 "multi_ctrlr": true, 00:22:54.709 "ana_reporting": false 00:22:54.709 }, 00:22:54.709 "vs": { 00:22:54.709 "nvme_version": "1.3" 00:22:54.709 }, 00:22:54.709 "ns_data": { 00:22:54.709 "id": 1, 00:22:54.709 "can_share": true 00:22:54.709 } 00:22:54.709 } 00:22:54.709 ], 00:22:54.709 "mp_policy": "active_passive" 00:22:54.709 } 00:22:54.709 } 00:22:54.709 ] 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.709 [2024-05-16 20:32:07.525315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:54.709 [2024-05-16 20:32:07.551046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:54.709 [2024-05-16 20:32:07.572050] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.709 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.709 [ 00:22:54.709 { 00:22:54.709 "name": "nvme0n1", 00:22:54.709 "aliases": [ 00:22:54.709 "e30677c5-77d5-4da6-a08e-c47bf84dac04" 00:22:54.709 ], 00:22:54.709 "product_name": "NVMe disk", 00:22:54.709 "block_size": 512, 00:22:54.709 "num_blocks": 2097152, 00:22:54.709 "uuid": "e30677c5-77d5-4da6-a08e-c47bf84dac04", 00:22:54.709 "assigned_rate_limits": { 00:22:54.709 "rw_ios_per_sec": 0, 00:22:54.709 "rw_mbytes_per_sec": 0, 00:22:54.709 "r_mbytes_per_sec": 0, 00:22:54.709 "w_mbytes_per_sec": 0 00:22:54.709 }, 00:22:54.710 "claimed": false, 00:22:54.710 "zoned": false, 00:22:54.710 "supported_io_types": { 00:22:54.710 "read": true, 00:22:54.710 "write": true, 00:22:54.710 "unmap": false, 00:22:54.710 "write_zeroes": true, 00:22:54.710 "flush": true, 00:22:54.710 "reset": true, 00:22:54.710 "compare": true, 00:22:54.710 "compare_and_write": true, 00:22:54.710 "abort": true, 00:22:54.710 "nvme_admin": true, 00:22:54.710 "nvme_io": true 00:22:54.710 }, 00:22:54.710 "memory_domains": [ 00:22:54.710 { 00:22:54.710 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:54.710 "dma_device_type": 0 00:22:54.710 } 00:22:54.710 ], 00:22:54.710 "driver_specific": { 00:22:54.710 "nvme": [ 00:22:54.710 { 00:22:54.710 "trid": { 00:22:54.710 "trtype": "RDMA", 00:22:54.710 "adrfam": "IPv4", 00:22:54.710 "traddr": "192.168.100.8", 00:22:54.710 "trsvcid": "4420", 00:22:54.710 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:54.710 }, 00:22:54.710 "ctrlr_data": { 00:22:54.710 "cntlid": 2, 00:22:54.710 "vendor_id": "0x8086", 00:22:54.710 "model_number": "SPDK bdev Controller", 00:22:54.710 "serial_number": "00000000000000000000", 00:22:54.710 "firmware_revision": "24.09", 00:22:54.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:54.710 "oacs": { 00:22:54.710 "security": 0, 00:22:54.710 "format": 0, 00:22:54.710 "firmware": 0, 00:22:54.710 "ns_manage": 0 00:22:54.710 }, 00:22:54.710 "multi_ctrlr": true, 00:22:54.710 "ana_reporting": false 00:22:54.710 }, 00:22:54.710 "vs": { 00:22:54.710 "nvme_version": "1.3" 00:22:54.710 }, 00:22:54.710 "ns_data": { 00:22:54.710 "id": 1, 00:22:54.710 "can_share": true 00:22:54.710 } 00:22:54.710 } 00:22:54.710 ], 00:22:54.710 "mp_policy": "active_passive" 00:22:54.710 } 00:22:54.710 } 00:22:54.710 ] 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ScKkZwvTbl 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.710 20:32:07 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ScKkZwvTbl 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.710 [2024-05-16 20:32:07.642910] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ScKkZwvTbl 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ScKkZwvTbl 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.710 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.710 [2024-05-16 20:32:07.658937] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.969 nvme0n1 00:22:54.969 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.969 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:54.969 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.969 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.969 [ 00:22:54.969 { 00:22:54.969 "name": "nvme0n1", 00:22:54.970 "aliases": [ 00:22:54.970 "e30677c5-77d5-4da6-a08e-c47bf84dac04" 00:22:54.970 ], 00:22:54.970 "product_name": "NVMe disk", 00:22:54.970 "block_size": 512, 00:22:54.970 "num_blocks": 2097152, 00:22:54.970 "uuid": "e30677c5-77d5-4da6-a08e-c47bf84dac04", 00:22:54.970 "assigned_rate_limits": { 00:22:54.970 "rw_ios_per_sec": 0, 00:22:54.970 "rw_mbytes_per_sec": 0, 00:22:54.970 "r_mbytes_per_sec": 0, 00:22:54.970 "w_mbytes_per_sec": 0 00:22:54.970 }, 00:22:54.970 "claimed": false, 00:22:54.970 "zoned": false, 00:22:54.970 "supported_io_types": { 00:22:54.970 "read": true, 00:22:54.970 "write": true, 00:22:54.970 "unmap": false, 00:22:54.970 "write_zeroes": true, 00:22:54.970 "flush": true, 00:22:54.970 "reset": true, 00:22:54.970 "compare": true, 00:22:54.970 "compare_and_write": true, 00:22:54.970 "abort": true, 
00:22:54.970 "nvme_admin": true, 00:22:54.970 "nvme_io": true 00:22:54.970 }, 00:22:54.970 "memory_domains": [ 00:22:54.970 { 00:22:54.970 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:54.970 "dma_device_type": 0 00:22:54.970 } 00:22:54.970 ], 00:22:54.970 "driver_specific": { 00:22:54.970 "nvme": [ 00:22:54.970 { 00:22:54.970 "trid": { 00:22:54.970 "trtype": "RDMA", 00:22:54.970 "adrfam": "IPv4", 00:22:54.970 "traddr": "192.168.100.8", 00:22:54.970 "trsvcid": "4421", 00:22:54.970 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:54.970 }, 00:22:54.970 "ctrlr_data": { 00:22:54.970 "cntlid": 3, 00:22:54.970 "vendor_id": "0x8086", 00:22:54.970 "model_number": "SPDK bdev Controller", 00:22:54.970 "serial_number": "00000000000000000000", 00:22:54.970 "firmware_revision": "24.09", 00:22:54.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:54.970 "oacs": { 00:22:54.970 "security": 0, 00:22:54.970 "format": 0, 00:22:54.970 "firmware": 0, 00:22:54.970 "ns_manage": 0 00:22:54.970 }, 00:22:54.970 "multi_ctrlr": true, 00:22:54.970 "ana_reporting": false 00:22:54.970 }, 00:22:54.970 "vs": { 00:22:54.970 "nvme_version": "1.3" 00:22:54.970 }, 00:22:54.970 "ns_data": { 00:22:54.970 "id": 1, 00:22:54.970 "can_share": true 00:22:54.970 } 00:22:54.970 } 00:22:54.970 ], 00:22:54.970 "mp_policy": "active_passive" 00:22:54.970 } 00:22:54.970 } 00:22:54.970 ] 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ScKkZwvTbl 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:54.970 rmmod nvme_rdma 00:22:54.970 rmmod nvme_fabrics 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3138443 ']' 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3138443 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3138443 ']' 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3138443 00:22:54.970 20:32:07 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3138443 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3138443' 00:22:54.970 killing process with pid 3138443 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3138443 00:22:54.970 [2024-05-16 20:32:07.838015] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:54.970 20:32:07 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3138443 00:22:54.970 [2024-05-16 20:32:07.881411] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:55.229 20:32:08 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:55.229 20:32:08 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:55.229 00:22:55.229 real 0m7.733s 00:22:55.229 user 0m3.556s 00:22:55.229 sys 0m4.748s 00:22:55.229 20:32:08 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:55.229 20:32:08 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:55.229 ************************************ 00:22:55.230 END TEST nvmf_async_init 00:22:55.230 ************************************ 00:22:55.230 20:32:08 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:55.230 20:32:08 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:55.230 20:32:08 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:55.230 20:32:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:55.230 ************************************ 00:22:55.230 START TEST dma 00:22:55.230 ************************************ 00:22:55.230 20:32:08 nvmf_rdma.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:55.230 * Looking for test storage... 
00:22:55.230 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:55.230 20:32:08 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.230 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:55.230 20:32:08 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.230 20:32:08 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.230 20:32:08 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.489 20:32:08 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.489 20:32:08 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.489 20:32:08 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.489 20:32:08 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:22:55.489 20:32:08 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.489 20:32:08 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:22:55.489 20:32:08 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:22:55.489 20:32:08 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:22:55.489 20:32:08 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:22:55.489 20:32:08 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.489 20:32:08 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.489 20:32:08 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.489 20:32:08 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.489 20:32:08 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.059 20:32:14 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:02.059 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:02.059 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:02.059 Found net devices under 0000:da:00.0: mlx_0_0 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:02.059 Found net devices under 0000:da:00.1: mlx_0_1 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:02.059 20:32:14 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:02.059 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:02.060 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:02.060 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:02.060 altname enp218s0f0np0 00:23:02.060 altname ens818f0np0 00:23:02.060 inet 192.168.100.8/24 scope global mlx_0_0 00:23:02.060 valid_lft forever preferred_lft forever 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:02.060 20:32:14 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:02.060 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:02.060 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:02.060 altname enp218s0f1np1 00:23:02.060 altname ens818f1np1 00:23:02.060 inet 192.168.100.9/24 scope global mlx_0_1 00:23:02.060 valid_lft forever preferred_lft forever 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:02.060 192.168.100.9' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:02.060 192.168.100.9' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:02.060 192.168.100.9' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:02.060 20:32:14 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.060 20:32:14 nvmf_rdma.dma -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:02.060 20:32:14 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=3142052 00:23:02.060 20:32:14 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 3142052 00:23:02.060 20:32:14 nvmf_rdma.dma -- common/autotest_common.sh@827 -- # '[' -z 3142052 ']' 00:23:02.060 20:32:14 nvmf_rdma.dma -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.060 20:32:14 nvmf_rdma.dma -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:02.060 20:32:14 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.060 20:32:14 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:02.060 20:32:14 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.060 [2024-05-16 20:32:14.303115] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:23:02.060 [2024-05-16 20:32:14.303160] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.060 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.060 [2024-05-16 20:32:14.365240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:02.060 [2024-05-16 20:32:14.447610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
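The allocate_nic_ips pass above resolves 192.168.100.8 and 192.168.100.9 by parsing "ip -o -4 addr show" output for each mlx_0_* interface. A minimal standalone sketch of that extraction, built only from the pipeline visible in the trace (the loop wrapper itself is illustrative, not the common.sh source):

    # Sketch: print the IPv4 address assigned to each RDMA-capable netdev,
    # mirroring the awk/cut pipeline logged above.
    for ifname in mlx_0_0 mlx_0_1; do
        addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
        echo "$ifname -> ${addr:-<unassigned>}"
    done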
00:23:02.060 [2024-05-16 20:32:14.447647] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.060 [2024-05-16 20:32:14.447654] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.060 [2024-05-16 20:32:14.447660] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.060 [2024-05-16 20:32:14.447665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.060 [2024-05-16 20:32:14.447710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.060 [2024-05-16 20:32:14.447713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@860 -- # return 0 00:23:02.319 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.319 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.319 20:32:15 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.319 [2024-05-16 20:32:15.162804] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x189a2f0/0x189e7e0) succeed. 00:23:02.319 [2024-05-16 20:32:15.171706] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x189b7f0/0x18dfe70) succeed. 
00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.319 20:32:15 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.319 Malloc0 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.319 20:32:15 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.319 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.576 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.576 20:32:15 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:23:02.576 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.576 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.576 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.576 20:32:15 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:02.576 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.576 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:02.576 [2024-05-16 20:32:15.327062] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:02.576 [2024-05-16 20:32:15.327445] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:02.576 20:32:15 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.576 20:32:15 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:23:02.576 20:32:15 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:23:02.576 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:23:02.576 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:23:02.576 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.576 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.576 { 00:23:02.576 "params": { 00:23:02.576 "name": "Nvme$subsystem", 00:23:02.576 "trtype": "$TEST_TRANSPORT", 00:23:02.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.576 "adrfam": "ipv4", 00:23:02.576 "trsvcid": "$NVMF_PORT", 00:23:02.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.576 "hdgst": ${hdgst:-false}, 00:23:02.576 "ddgst": ${ddgst:-false} 00:23:02.576 }, 00:23:02.577 "method": "bdev_nvme_attach_controller" 00:23:02.577 } 00:23:02.577 EOF 00:23:02.577 )") 00:23:02.577 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:23:02.577 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
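Taken together, the rpc_cmd calls above provision the RDMA target that every run in this test talks to: an RDMA transport with 1024 shared buffers, a 256 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode0 with Malloc0 added as a namespace, and a listener on 192.168.100.8:4420. A sketch of the same sequence issued directly through scripts/rpc.py (the standard SPDK RPC client) rather than the test's rpc_cmd wrapper, with the parameters shown in the trace:

    # Equivalent provisioning via scripts/rpc.py (illustrative; dma.sh drives these
    # same RPCs through its rpc_cmd helper against the nvmf_tgt started above).
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc bdev_malloc_create 256 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420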
00:23:02.577 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:23:02.577 20:32:15 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:02.577 "params": { 00:23:02.577 "name": "Nvme0", 00:23:02.577 "trtype": "rdma", 00:23:02.577 "traddr": "192.168.100.8", 00:23:02.577 "adrfam": "ipv4", 00:23:02.577 "trsvcid": "4420", 00:23:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:02.577 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:02.577 "hdgst": false, 00:23:02.577 "ddgst": false 00:23:02.577 }, 00:23:02.577 "method": "bdev_nvme_attach_controller" 00:23:02.577 }' 00:23:02.577 [2024-05-16 20:32:15.371738] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:23:02.577 [2024-05-16 20:32:15.371783] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3142282 ] 00:23:02.577 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.577 [2024-05-16 20:32:15.426309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:02.577 [2024-05-16 20:32:15.500598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.577 [2024-05-16 20:32:15.500600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.134 bdev Nvme0n1 reports 1 memory domains 00:23:09.134 bdev Nvme0n1 supports RDMA memory domain 00:23:09.134 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:09.134 ========================================================================== 00:23:09.134 Latency [us] 00:23:09.134 IOPS MiB/s Average min max 00:23:09.134 Core 2: 21034.39 82.17 759.99 253.72 8601.27 00:23:09.134 Core 3: 21166.77 82.68 755.17 254.18 8693.03 00:23:09.134 ========================================================================== 00:23:09.134 Total : 42201.15 164.85 757.57 253.72 8693.03 00:23:09.134 00:23:09.134 Total operations: 211034, translate 211034 pull_push 0 memzero 0 00:23:09.134 20:32:20 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:23:09.134 20:32:20 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:23:09.134 20:32:20 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:23:09.134 [2024-05-16 20:32:20.935378] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
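The MiB/s column in the summary above is simply IOPS scaled by the 4096-byte I/O size the run was started with (-o 4096); checking the translate run's total:

    # 42201.15 IOPS x 4096 bytes per I/O, expressed in MiB/s -> ~164.85, matching the table.
    awk 'BEGIN { printf "%.2f MiB/s\n", 42201.15 * 4096 / (1024 * 1024) }'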
00:23:09.134 [2024-05-16 20:32:20.935430] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143203 ] 00:23:09.134 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.134 [2024-05-16 20:32:20.990374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:09.134 [2024-05-16 20:32:21.063735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.134 [2024-05-16 20:32:21.063738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.395 bdev Malloc0 reports 2 memory domains 00:23:14.395 bdev Malloc0 doesn't support RDMA memory domain 00:23:14.395 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:14.395 ========================================================================== 00:23:14.395 Latency [us] 00:23:14.395 IOPS MiB/s Average min max 00:23:14.395 Core 2: 14444.19 56.42 1106.97 402.49 1414.89 00:23:14.395 Core 3: 14479.77 56.56 1104.24 424.04 1738.45 00:23:14.395 ========================================================================== 00:23:14.395 Total : 28923.96 112.98 1105.60 402.49 1738.45 00:23:14.395 00:23:14.395 Total operations: 144674, translate 0 pull_push 578696 memzero 0 00:23:14.395 20:32:26 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:23:14.395 20:32:26 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:23:14.395 20:32:26 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:23:14.395 20:32:26 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:23:14.395 Ignoring -M option 00:23:14.395 [2024-05-16 20:32:26.416548] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
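Unlike Nvme0n1, Malloc0 exposes no RDMA memory domain, so the run above falls back entirely to pull_push staging (578696 pull/push operations, zero translations). The next run targets lvs0/lvol0, a logical volume presumably carved out of the attached NVMe-oF namespace; the trace never shows its creation (gen_lvol_nvme_json hands the whole stack to test_dma as JSON config), but a roughly equivalent manual setup, against any SPDK app that runs the RPC server, would look like the sketch below. The 128 MiB size is arbitrary and the exact form of the bdev_lvol_create size argument differs across SPDK releases, so treat this as an outline rather than the test's actual code:

    # Rough manual equivalent of provisioning an lvs0/lvol0 bdev on the remote
    # namespace (illustrative only; dma.sh builds this via gen_lvol_nvme_json).
    rpc=./scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_create_lvstore Nvme0n1 lvs0
    $rpc bdev_lvol_create -l lvs0 lvol0 128    # size in MiB here; check rpc.py bdev_lvol_create -h on your version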
00:23:14.395 [2024-05-16 20:32:26.416600] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144115 ] 00:23:14.395 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.395 [2024-05-16 20:32:26.470161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:14.395 [2024-05-16 20:32:26.540438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.395 [2024-05-16 20:32:26.540439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.658 bdev 63833e9b-afc1-4f7c-9031-5b0a98617887 reports 1 memory domains 00:23:19.658 bdev 63833e9b-afc1-4f7c-9031-5b0a98617887 supports RDMA memory domain 00:23:19.658 Initialization complete, running randread IO for 5 sec on 2 cores 00:23:19.658 ========================================================================== 00:23:19.658 Latency [us] 00:23:19.658 IOPS MiB/s Average min max 00:23:19.658 Core 2: 80260.64 313.52 198.61 69.06 2937.30 00:23:19.658 Core 3: 80694.99 315.21 197.53 72.25 2863.82 00:23:19.658 ========================================================================== 00:23:19.658 Total : 160955.63 628.73 198.07 69.06 2937.30 00:23:19.658 00:23:19.658 Total operations: 804876, translate 0 pull_push 0 memzero 804876 00:23:19.658 20:32:31 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:23:19.658 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.658 [2024-05-16 20:32:32.075093] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:21.556 Initializing NVMe Controllers 00:23:21.556 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:23:21.556 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:21.556 Initialization complete. Launching workers. 00:23:21.556 ======================================================== 00:23:21.556 Latency(us) 00:23:21.556 Device Information : IOPS MiB/s Average min max 00:23:21.556 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7980.00 6973.60 8979.04 00:23:21.556 ======================================================== 00:23:21.556 Total : 2016.00 7.88 7980.00 6973.60 8979.04 00:23:21.556 00:23:21.556 20:32:34 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:23:21.556 20:32:34 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:23:21.556 20:32:34 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:23:21.556 20:32:34 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:23:21.556 [2024-05-16 20:32:34.406554] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
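The spdk_nvme_perf run above exercises the same 192.168.100.8:4420 listener from a second userspace initiator. For a quick kernel-initiator sanity check against that listener, an equivalent nvme-cli attach would look like the following (addresses and NQNs taken from the trace; this is illustrative and not part of dma.sh, whose NVME_CONNECT wrapper would additionally pass -i 15 to set the I/O queue count):

    # Illustrative kernel-initiator attach to the subsystem exercised above
    # (not executed by dma.sh; shown only to relate the listener to nvme-cli).
    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
    nvme list                                    # the namespace appears as a new /dev/nvmeXnY
    nvme disconnect -n nqn.2016-06.io.spdk:cnode0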
00:23:21.556 [2024-05-16 20:32:34.406593] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145400 ] 00:23:21.556 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.556 [2024-05-16 20:32:34.460772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:21.556 [2024-05-16 20:32:34.533931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.556 [2024-05-16 20:32:34.533934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.173 bdev f8a3828f-e0f6-4388-8a4d-fb90df6b8109 reports 1 memory domains 00:23:28.173 bdev f8a3828f-e0f6-4388-8a4d-fb90df6b8109 supports RDMA memory domain 00:23:28.173 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:28.173 ========================================================================== 00:23:28.173 Latency [us] 00:23:28.173 IOPS MiB/s Average min max 00:23:28.173 Core 2: 18705.42 73.07 854.62 48.28 12649.14 00:23:28.173 Core 3: 18970.37 74.10 842.69 19.34 12289.10 00:23:28.173 ========================================================================== 00:23:28.173 Total : 37675.79 147.17 848.61 19.34 12649.14 00:23:28.173 00:23:28.173 Total operations: 188419, translate 188312 pull_push 0 memzero 107 00:23:28.173 20:32:39 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:28.173 20:32:39 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:23:28.173 20:32:39 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:28.173 20:32:39 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:23:28.173 20:32:39 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:28.173 20:32:39 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:28.173 20:32:39 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:23:28.173 20:32:39 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:28.173 20:32:39 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:28.173 rmmod nvme_rdma 00:23:28.173 rmmod nvme_fabrics 00:23:28.173 20:32:40 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.173 20:32:40 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:23:28.173 20:32:40 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:23:28.173 20:32:40 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 3142052 ']' 00:23:28.173 20:32:40 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 3142052 00:23:28.173 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@946 -- # '[' -z 3142052 ']' 00:23:28.173 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@950 -- # kill -0 3142052 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # uname 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3142052 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3142052' 00:23:28.174 killing process with pid 3142052 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@965 -- # kill 3142052 00:23:28.174 [2024-05-16 20:32:40.082333] app.c:1024:log_deprecation_hits: 
*WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@970 -- # wait 3142052 00:23:28.174 [2024-05-16 20:32:40.132002] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:23:28.174 20:32:40 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.174 20:32:40 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:28.174 00:23:28.174 real 0m32.269s 00:23:28.174 user 1m36.380s 00:23:28.174 sys 0m5.559s 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:28.174 20:32:40 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:28.174 ************************************ 00:23:28.174 END TEST dma 00:23:28.174 ************************************ 00:23:28.174 20:32:40 nvmf_rdma -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:28.174 20:32:40 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:28.174 20:32:40 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:28.174 20:32:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:28.174 ************************************ 00:23:28.174 START TEST nvmf_identify 00:23:28.174 ************************************ 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:28.174 * Looking for test storage... 00:23:28.174 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:28.174 20:32:40 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.735 20:32:46 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:34.735 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:34.735 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:34.735 Found net devices under 0000:da:00.0: mlx_0_0 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:34.735 Found net devices under 0000:da:00.1: mlx_0_1 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg 
rxe-net 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:34.735 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:34.735 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:34.735 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:34.736 altname enp218s0f0np0 00:23:34.736 altname ens818f0np0 00:23:34.736 inet 192.168.100.8/24 scope global mlx_0_0 00:23:34.736 valid_lft forever preferred_lft forever 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:34.736 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:34.736 
link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:34.736 altname enp218s0f1np1 00:23:34.736 altname ens818f1np1 00:23:34.736 inet 192.168.100.9/24 scope global mlx_0_1 00:23:34.736 valid_lft forever preferred_lft forever 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:34.736 192.168.100.9' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:34.736 192.168.100.9' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:34.736 192.168.100.9' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3149758 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3149758 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3149758 ']' 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.736 20:32:46 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.736 [2024-05-16 20:32:46.864896] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
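The address selection traced above reduces to reading the first IPv4 address of each RDMA-capable netdev and splitting the resulting list into a first and second target IP. A minimal sketch of the same logic, using the interface names from this run (the small helper below only mirrors get_ip_address from nvmf/common.sh; it is not the exact function):

  # Sketch: derive the target IPs the same way the trace above does.
  get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST="$(get_ip mlx_0_0; get_ip mlx_0_1)"                        # two lines: 192.168.100.8 / 192.168.100.9 in this run
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'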
00:23:34.736 [2024-05-16 20:32:46.864945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.736 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.736 [2024-05-16 20:32:46.926055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.736 [2024-05-16 20:32:47.013207] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.736 [2024-05-16 20:32:47.013242] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.736 [2024-05-16 20:32:47.013249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.736 [2024-05-16 20:32:47.013255] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.736 [2024-05-16 20:32:47.013260] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.736 [2024-05-16 20:32:47.013310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.736 [2024-05-16 20:32:47.013325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.736 [2024-05-16 20:32:47.013417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.736 [2024-05-16 20:32:47.013418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.736 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.736 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:23:34.736 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:34.736 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.736 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.736 [2024-05-16 20:32:47.702404] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16e09b0/0x16e4ea0) succeed. 00:23:34.736 [2024-05-16 20:32:47.712505] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16e1ff0/0x1726530) succeed. 
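Between the target launch at the end of the trace above (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 3149758) and the IB device creation notices, waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough stand-in for that wait, assuming the default socket and scripts/rpc.py (the real helper in autotest_common.sh also handles timeouts and other corner cases):

  # Sketch only: start the target and poll its RPC socket until it responds.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # stop waiting if the target died
      sleep 0.5
  done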
00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.995 Malloc0 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.995 [2024-05-16 20:32:47.909473] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:34.995 [2024-05-16 20:32:47.909861] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:34.995 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.996 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.996 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.996 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:34.996 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.996 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.996 [ 00:23:34.996 { 00:23:34.996 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:34.996 "subtype": "Discovery", 00:23:34.996 "listen_addresses": [ 00:23:34.996 { 00:23:34.996 "trtype": "RDMA", 00:23:34.996 "adrfam": "IPv4", 00:23:34.996 
"traddr": "192.168.100.8", 00:23:34.996 "trsvcid": "4420" 00:23:34.996 } 00:23:34.996 ], 00:23:34.996 "allow_any_host": true, 00:23:34.996 "hosts": [] 00:23:34.996 }, 00:23:34.996 { 00:23:34.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.996 "subtype": "NVMe", 00:23:34.996 "listen_addresses": [ 00:23:34.996 { 00:23:34.996 "trtype": "RDMA", 00:23:34.996 "adrfam": "IPv4", 00:23:34.996 "traddr": "192.168.100.8", 00:23:34.996 "trsvcid": "4420" 00:23:34.996 } 00:23:34.996 ], 00:23:34.996 "allow_any_host": true, 00:23:34.996 "hosts": [], 00:23:34.996 "serial_number": "SPDK00000000000001", 00:23:34.996 "model_number": "SPDK bdev Controller", 00:23:34.996 "max_namespaces": 32, 00:23:34.996 "min_cntlid": 1, 00:23:34.996 "max_cntlid": 65519, 00:23:34.996 "namespaces": [ 00:23:34.996 { 00:23:34.996 "nsid": 1, 00:23:34.996 "bdev_name": "Malloc0", 00:23:34.996 "name": "Malloc0", 00:23:34.996 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:34.996 "eui64": "ABCDEF0123456789", 00:23:34.996 "uuid": "f2940c9d-313c-46b6-96e0-0d83e0fe146e" 00:23:34.996 } 00:23:34.996 ] 00:23:34.996 } 00:23:34.996 ] 00:23:34.996 20:32:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.996 20:32:47 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:34.996 [2024-05-16 20:32:47.960859] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:23:34.996 [2024-05-16 20:32:47.960902] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150006 ] 00:23:34.996 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.264 [2024-05-16 20:32:48.005740] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:35.264 [2024-05-16 20:32:48.005815] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:35.264 [2024-05-16 20:32:48.005834] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:35.264 [2024-05-16 20:32:48.005838] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:35.264 [2024-05-16 20:32:48.005863] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:35.264 [2024-05-16 20:32:48.021976] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:23:35.264 [2024-05-16 20:32:48.036717] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:35.264 [2024-05-16 20:32:48.036728] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:35.264 [2024-05-16 20:32:48.036734] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036739] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036744] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036748] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036752] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036756] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036761] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036765] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036769] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036773] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036778] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036782] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036786] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036790] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036794] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036798] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036802] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036806] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036811] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036815] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036819] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036824] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036828] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 
20:32:48.036832] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036837] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036840] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:23:35.264 [2024-05-16 20:32:48.036847] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.036852] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.036856] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.036860] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.036864] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.036868] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:35.265 [2024-05-16 20:32:48.036872] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:35.265 [2024-05-16 20:32:48.036875] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:35.265 [2024-05-16 20:32:48.036889] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.036900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182e00 00:23:35.265 [2024-05-16 20:32:48.042427] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.042435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.042441] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042448] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:35.265 [2024-05-16 20:32:48.042454] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:35.265 [2024-05-16 20:32:48.042459] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:35.265 [2024-05-16 20:32:48.042469] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.265 [2024-05-16 20:32:48.042501] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.042505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.042510] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:35.265 [2024-05-16 20:32:48.042514] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042518] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:35.265 [2024-05-16 20:32:48.042524] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.265 [2024-05-16 20:32:48.042549] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.042553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.042558] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:35.265 [2024-05-16 20:32:48.042561] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042567] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:35.265 [2024-05-16 20:32:48.042577] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.265 [2024-05-16 20:32:48.042600] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.042605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.042610] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:35.265 [2024-05-16 20:32:48.042614] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042620] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.265 [2024-05-16 20:32:48.042646] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.042650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.042655] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:35.265 [2024-05-16 20:32:48.042659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:35.265 [2024-05-16 20:32:48.042663] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042668] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:35.265 [2024-05-16 20:32:48.042772] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:35.265 [2024-05-16 20:32:48.042777] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:35.265 [2024-05-16 20:32:48.042784] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.265 [2024-05-16 20:32:48.042809] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.042813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.042818] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:35.265 [2024-05-16 20:32:48.042821] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042828] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.265 [2024-05-16 20:32:48.042850] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.042854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.042859] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:35.265 [2024-05-16 20:32:48.042864] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:35.265 [2024-05-16 20:32:48.042868] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042873] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:35.265 [2024-05-16 20:32:48.042883] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:35.265 [2024-05-16 20:32:48.042891] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:23:35.265 [2024-05-16 20:32:48.042929] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.042933] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.042940] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:35.265 [2024-05-16 20:32:48.042944] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:35.265 [2024-05-16 20:32:48.042948] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:35.265 [2024-05-16 20:32:48.042952] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:35.265 [2024-05-16 20:32:48.042956] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:35.265 [2024-05-16 20:32:48.042960] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:35.265 [2024-05-16 20:32:48.042964] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042972] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:35.265 [2024-05-16 20:32:48.042980] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.042986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.265 [2024-05-16 20:32:48.043013] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.265 [2024-05-16 20:32:48.043018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:35.265 [2024-05-16 20:32:48.043024] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.043029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.265 [2024-05-16 20:32:48.043034] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182e00 00:23:35.265 [2024-05-16 20:32:48.043039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.266 [2024-05-16 20:32:48.043044] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.266 [2024-05-16 20:32:48.043055] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.266 [2024-05-16 20:32:48.043066] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:23:35.266 [2024-05-16 20:32:48.043069] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043078] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:35.266 [2024-05-16 20:32:48.043084] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.266 [2024-05-16 20:32:48.043106] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.266 [2024-05-16 20:32:48.043110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:35.266 [2024-05-16 20:32:48.043115] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:35.266 [2024-05-16 20:32:48.043119] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:35.266 [2024-05-16 20:32:48.043123] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043130] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:23:35.266 [2024-05-16 20:32:48.043161] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.266 [2024-05-16 20:32:48.043166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:35.266 [2024-05-16 20:32:48.043171] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043179] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:35.266 [2024-05-16 20:32:48.043199] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182e00 00:23:35.266 [2024-05-16 20:32:48.043211] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.266 [2024-05-16 20:32:48.043238] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.266 [2024-05-16 20:32:48.043242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:35.266 [2024-05-16 20:32:48.043250] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b00 length 0x40 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182e00 00:23:35.266 [2024-05-16 20:32:48.043260] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043267] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.266 [2024-05-16 20:32:48.043271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:35.266 [2024-05-16 20:32:48.043275] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043294] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.266 [2024-05-16 20:32:48.043298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:35.266 [2024-05-16 20:32:48.043306] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182e00 00:23:35.266 [2024-05-16 20:32:48.043316] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:23:35.266 [2024-05-16 20:32:48.043340] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.266 [2024-05-16 20:32:48.043345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:35.266 [2024-05-16 20:32:48.043352] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:23:35.266 ===================================================== 00:23:35.266 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:35.266 ===================================================== 00:23:35.266 Controller Capabilities/Features 00:23:35.266 ================================ 00:23:35.266 Vendor ID: 0000 00:23:35.266 Subsystem Vendor ID: 0000 00:23:35.266 Serial Number: .................... 00:23:35.266 Model Number: ........................................ 
00:23:35.266 Firmware Version: 24.09 00:23:35.266 Recommended Arb Burst: 0 00:23:35.266 IEEE OUI Identifier: 00 00 00 00:23:35.266 Multi-path I/O 00:23:35.266 May have multiple subsystem ports: No 00:23:35.266 May have multiple controllers: No 00:23:35.266 Associated with SR-IOV VF: No 00:23:35.266 Max Data Transfer Size: 131072 00:23:35.266 Max Number of Namespaces: 0 00:23:35.266 Max Number of I/O Queues: 1024 00:23:35.266 NVMe Specification Version (VS): 1.3 00:23:35.266 NVMe Specification Version (Identify): 1.3 00:23:35.266 Maximum Queue Entries: 128 00:23:35.266 Contiguous Queues Required: Yes 00:23:35.266 Arbitration Mechanisms Supported 00:23:35.266 Weighted Round Robin: Not Supported 00:23:35.266 Vendor Specific: Not Supported 00:23:35.266 Reset Timeout: 15000 ms 00:23:35.266 Doorbell Stride: 4 bytes 00:23:35.266 NVM Subsystem Reset: Not Supported 00:23:35.266 Command Sets Supported 00:23:35.266 NVM Command Set: Supported 00:23:35.266 Boot Partition: Not Supported 00:23:35.266 Memory Page Size Minimum: 4096 bytes 00:23:35.266 Memory Page Size Maximum: 4096 bytes 00:23:35.266 Persistent Memory Region: Not Supported 00:23:35.266 Optional Asynchronous Events Supported 00:23:35.266 Namespace Attribute Notices: Not Supported 00:23:35.266 Firmware Activation Notices: Not Supported 00:23:35.266 ANA Change Notices: Not Supported 00:23:35.266 PLE Aggregate Log Change Notices: Not Supported 00:23:35.266 LBA Status Info Alert Notices: Not Supported 00:23:35.266 EGE Aggregate Log Change Notices: Not Supported 00:23:35.266 Normal NVM Subsystem Shutdown event: Not Supported 00:23:35.266 Zone Descriptor Change Notices: Not Supported 00:23:35.266 Discovery Log Change Notices: Supported 00:23:35.266 Controller Attributes 00:23:35.266 128-bit Host Identifier: Not Supported 00:23:35.266 Non-Operational Permissive Mode: Not Supported 00:23:35.266 NVM Sets: Not Supported 00:23:35.266 Read Recovery Levels: Not Supported 00:23:35.266 Endurance Groups: Not Supported 00:23:35.266 Predictable Latency Mode: Not Supported 00:23:35.266 Traffic Based Keep ALive: Not Supported 00:23:35.266 Namespace Granularity: Not Supported 00:23:35.266 SQ Associations: Not Supported 00:23:35.266 UUID List: Not Supported 00:23:35.266 Multi-Domain Subsystem: Not Supported 00:23:35.266 Fixed Capacity Management: Not Supported 00:23:35.266 Variable Capacity Management: Not Supported 00:23:35.266 Delete Endurance Group: Not Supported 00:23:35.266 Delete NVM Set: Not Supported 00:23:35.266 Extended LBA Formats Supported: Not Supported 00:23:35.266 Flexible Data Placement Supported: Not Supported 00:23:35.266 00:23:35.266 Controller Memory Buffer Support 00:23:35.266 ================================ 00:23:35.266 Supported: No 00:23:35.266 00:23:35.266 Persistent Memory Region Support 00:23:35.266 ================================ 00:23:35.266 Supported: No 00:23:35.266 00:23:35.266 Admin Command Set Attributes 00:23:35.266 ============================ 00:23:35.266 Security Send/Receive: Not Supported 00:23:35.266 Format NVM: Not Supported 00:23:35.266 Firmware Activate/Download: Not Supported 00:23:35.266 Namespace Management: Not Supported 00:23:35.266 Device Self-Test: Not Supported 00:23:35.266 Directives: Not Supported 00:23:35.266 NVMe-MI: Not Supported 00:23:35.266 Virtualization Management: Not Supported 00:23:35.266 Doorbell Buffer Config: Not Supported 00:23:35.266 Get LBA Status Capability: Not Supported 00:23:35.266 Command & Feature Lockdown Capability: Not Supported 00:23:35.266 Abort Command Limit: 1 00:23:35.266 Async 
Event Request Limit: 4 00:23:35.266 Number of Firmware Slots: N/A 00:23:35.267 Firmware Slot 1 Read-Only: N/A 00:23:35.267 Firmware Activation Without Reset: N/A 00:23:35.267 Multiple Update Detection Support: N/A 00:23:35.267 Firmware Update Granularity: No Information Provided 00:23:35.267 Per-Namespace SMART Log: No 00:23:35.267 Asymmetric Namespace Access Log Page: Not Supported 00:23:35.267 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:35.267 Command Effects Log Page: Not Supported 00:23:35.267 Get Log Page Extended Data: Supported 00:23:35.267 Telemetry Log Pages: Not Supported 00:23:35.267 Persistent Event Log Pages: Not Supported 00:23:35.267 Supported Log Pages Log Page: May Support 00:23:35.267 Commands Supported & Effects Log Page: Not Supported 00:23:35.267 Feature Identifiers & Effects Log Page:May Support 00:23:35.267 NVMe-MI Commands & Effects Log Page: May Support 00:23:35.267 Data Area 4 for Telemetry Log: Not Supported 00:23:35.267 Error Log Page Entries Supported: 128 00:23:35.267 Keep Alive: Not Supported 00:23:35.267 00:23:35.267 NVM Command Set Attributes 00:23:35.267 ========================== 00:23:35.267 Submission Queue Entry Size 00:23:35.267 Max: 1 00:23:35.267 Min: 1 00:23:35.267 Completion Queue Entry Size 00:23:35.267 Max: 1 00:23:35.267 Min: 1 00:23:35.267 Number of Namespaces: 0 00:23:35.267 Compare Command: Not Supported 00:23:35.267 Write Uncorrectable Command: Not Supported 00:23:35.267 Dataset Management Command: Not Supported 00:23:35.267 Write Zeroes Command: Not Supported 00:23:35.267 Set Features Save Field: Not Supported 00:23:35.267 Reservations: Not Supported 00:23:35.267 Timestamp: Not Supported 00:23:35.267 Copy: Not Supported 00:23:35.267 Volatile Write Cache: Not Present 00:23:35.267 Atomic Write Unit (Normal): 1 00:23:35.267 Atomic Write Unit (PFail): 1 00:23:35.267 Atomic Compare & Write Unit: 1 00:23:35.267 Fused Compare & Write: Supported 00:23:35.267 Scatter-Gather List 00:23:35.267 SGL Command Set: Supported 00:23:35.267 SGL Keyed: Supported 00:23:35.267 SGL Bit Bucket Descriptor: Not Supported 00:23:35.267 SGL Metadata Pointer: Not Supported 00:23:35.267 Oversized SGL: Not Supported 00:23:35.267 SGL Metadata Address: Not Supported 00:23:35.267 SGL Offset: Supported 00:23:35.267 Transport SGL Data Block: Not Supported 00:23:35.267 Replay Protected Memory Block: Not Supported 00:23:35.267 00:23:35.267 Firmware Slot Information 00:23:35.267 ========================= 00:23:35.267 Active slot: 0 00:23:35.267 00:23:35.267 00:23:35.267 Error Log 00:23:35.267 ========= 00:23:35.267 00:23:35.267 Active Namespaces 00:23:35.267 ================= 00:23:35.267 Discovery Log Page 00:23:35.267 ================== 00:23:35.267 Generation Counter: 2 00:23:35.267 Number of Records: 2 00:23:35.267 Record Format: 0 00:23:35.267 00:23:35.267 Discovery Log Entry 0 00:23:35.267 ---------------------- 00:23:35.267 Transport Type: 1 (RDMA) 00:23:35.267 Address Family: 1 (IPv4) 00:23:35.267 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:35.267 Entry Flags: 00:23:35.267 Duplicate Returned Information: 1 00:23:35.267 Explicit Persistent Connection Support for Discovery: 1 00:23:35.267 Transport Requirements: 00:23:35.267 Secure Channel: Not Required 00:23:35.267 Port ID: 0 (0x0000) 00:23:35.267 Controller ID: 65535 (0xffff) 00:23:35.267 Admin Max SQ Size: 128 00:23:35.267 Transport Service Identifier: 4420 00:23:35.267 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:35.267 Transport Address: 192.168.100.8 00:23:35.267 
Transport Specific Address Subtype - RDMA 00:23:35.267 RDMA QP Service Type: 1 (Reliable Connected) 00:23:35.267 RDMA Provider Type: 1 (No provider specified) 00:23:35.267 RDMA CM Service: 1 (RDMA_CM) 00:23:35.267 Discovery Log Entry 1 00:23:35.267 ---------------------- 00:23:35.267 Transport Type: 1 (RDMA) 00:23:35.267 Address Family: 1 (IPv4) 00:23:35.267 Subsystem Type: 2 (NVM Subsystem) 00:23:35.267 Entry Flags: 00:23:35.267 Duplicate Returned Information: 0 00:23:35.267 Explicit Persistent Connection Support for Discovery: 0 00:23:35.267 Transport Requirements: 00:23:35.267 Secure Channel: Not Required 00:23:35.267 Port ID: 0 (0x0000) 00:23:35.267 Controller ID: 65535 (0xffff) 00:23:35.267 Admin Max SQ Size: [2024-05-16 20:32:48.043419] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:35.267 [2024-05-16 20:32:48.043432] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10555 doesn't match qid 00:23:35.267 [2024-05-16 20:32:48.043444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32681 cdw0:5 sqhd:3310 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043449] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10555 doesn't match qid 00:23:35.267 [2024-05-16 20:32:48.043455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32681 cdw0:5 sqhd:3310 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043460] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10555 doesn't match qid 00:23:35.267 [2024-05-16 20:32:48.043465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32681 cdw0:5 sqhd:3310 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043470] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10555 doesn't match qid 00:23:35.267 [2024-05-16 20:32:48.043475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32681 cdw0:5 sqhd:3310 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043482] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.267 [2024-05-16 20:32:48.043505] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.267 [2024-05-16 20:32:48.043509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043515] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.267 [2024-05-16 20:32:48.043525] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043548] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.267 [2024-05-16 20:32:48.043553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043557] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:35.267 [2024-05-16 20:32:48.043563] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:35.267 [2024-05-16 20:32:48.043567] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043574] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.267 [2024-05-16 20:32:48.043600] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.267 [2024-05-16 20:32:48.043605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043609] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043616] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.267 [2024-05-16 20:32:48.043640] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.267 [2024-05-16 20:32:48.043644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043648] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043655] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.267 [2024-05-16 20:32:48.043681] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.267 [2024-05-16 20:32:48.043686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:35.267 [2024-05-16 20:32:48.043690] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043697] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.267 [2024-05-16 20:32:48.043703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.043720] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.043724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.043728] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043735] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 
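The FABRIC PROPERTY GET/SET and GET LOG PAGE exchanges traced here (cdw10 log identifier 0x70, the Discovery Log) are the standard fabrics bring-up of the discovery controller, and they produced the two-record Discovery Log printed above: the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1. For reference, the same records can be fetched from the host side with nvme-cli, given the nvme-rdma module loaded earlier in this log:

  # Host-side view of the same discovery log (nvme-cli).
  nvme discover -t rdma -a 192.168.100.8 -s 4420
  # Connecting to the NVM subsystem afterwards; this suite itself uses 'nvme connect -i 15'.
  nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -i 15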
00:23:35.268 [2024-05-16 20:32:48.043741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.043755] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.043759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.043763] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043770] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.043800] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.043804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.043809] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043816] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.043840] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.043845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.043849] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043856] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.043880] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.043885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.043889] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043896] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.043920] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.043924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.043929] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043936] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.043963] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.043967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.043971] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043978] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.043984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044000] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044009] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044015] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044039] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044048] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044054] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044081] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044090] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044096] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044122] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044130] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044137] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044160] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044169] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044175] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044200] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044208] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044215] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044237] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044246] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044252] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044275] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044283] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044290] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044315] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044324] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044331] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044355] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.268 [2024-05-16 20:32:48.044359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:35.268 [2024-05-16 20:32:48.044363] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044370] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.268 [2024-05-16 20:32:48.044376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.268 [2024-05-16 20:32:48.044398] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044406] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044413] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044441] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044450] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044456] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044477] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 
20:32:48.044486] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044492] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044518] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044527] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044533] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044555] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044564] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044571] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044593] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044601] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044608] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044638] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044646] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044653] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044678] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044687] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044694] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044719] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044727] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044735] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044762] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044771] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044777] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044804] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044813] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044819] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044840] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044848] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044855] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044880] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044888] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044895] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044922] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.269 [2024-05-16 20:32:48.044926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:35.269 [2024-05-16 20:32:48.044930] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044937] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.269 [2024-05-16 20:32:48.044943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.269 [2024-05-16 20:32:48.044964] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.044968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.044973] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.044981] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.044987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045007] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045016] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045023] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045047] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 
20:32:48.045056] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045062] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045084] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045093] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045100] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045122] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045130] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045137] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045164] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045172] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045179] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045208] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045218] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045225] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045249] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045258] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045264] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045287] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045295] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045302] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045325] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045333] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045340] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045367] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045375] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045382] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045407] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045415] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045426] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045453] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045463] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045470] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045492] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045501] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045507] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045531] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045540] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045546] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045568] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045577] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045583] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045605] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 
20:32:48.045614] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045621] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.270 [2024-05-16 20:32:48.045651] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.270 [2024-05-16 20:32:48.045656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:35.270 [2024-05-16 20:32:48.045660] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045666] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.270 [2024-05-16 20:32:48.045673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.045693] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.045699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.045703] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045710] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.045735] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.045739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.045743] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045750] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.045776] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.045781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.045785] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045792] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.045821] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.045825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.045829] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045836] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.045863] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.045867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.045871] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045878] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.045904] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.045908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.045913] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045919] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.045947] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.045953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.045957] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045964] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.045970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.045987] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.045991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.045996] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046002] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046026] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.046034] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046041] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046062] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.046070] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046077] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046103] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.046112] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046119] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046144] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.046153] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046159] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046187] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 
20:32:48.046196] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046202] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046227] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.046236] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046242] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046269] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.046278] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046284] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046308] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.046316] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046323] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046347] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.271 [2024-05-16 20:32:48.046351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:35.271 [2024-05-16 20:32:48.046355] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046362] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.271 [2024-05-16 20:32:48.046368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.271 [2024-05-16 20:32:48.046387] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.272 [2024-05-16 20:32:48.046391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:35.272 [2024-05-16 20:32:48.046396] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.046402] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.046408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.272 [2024-05-16 20:32:48.050427] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.272 [2024-05-16 20:32:48.050434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:35.272 [2024-05-16 20:32:48.050438] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.050445] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.050452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.272 [2024-05-16 20:32:48.050474] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.272 [2024-05-16 20:32:48.050478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0019 p:0 m:0 dnr:0 00:23:35.272 [2024-05-16 20:32:48.050482] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.050488] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:35.272 128 00:23:35.272 Transport Service Identifier: 4420 00:23:35.272 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:35.272 Transport Address: 192.168.100.8 00:23:35.272 Transport Specific Address Subtype - RDMA 00:23:35.272 RDMA QP Service Type: 1 (Reliable Connected) 00:23:35.272 RDMA Provider Type: 1 (No provider specified) 00:23:35.272 RDMA CM Service: 1 (RDMA_CM) 00:23:35.272 20:32:48 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:35.272 [2024-05-16 20:32:48.118150] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
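[editor note] The spdk_nvme_identify invocation above addresses the target subsystem with a single transport ID string (trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1). The following is a minimal sketch of how such a string maps onto SPDK's public host API; it is illustrative only, not the source of the identify tool, and error handling is abbreviated.

    /* Parse a transport ID string and connect to the NVMe-oF subsystem,
     * then read the controller identify data. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0)
            return 1;

        /* Same transport ID format as on the command line above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
            return 1;

        /* Connecting issues the FABRIC CONNECT and walks the CC/CSTS
         * init state machine that the *DEBUG* entries below trace. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL)
            return 1;

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Connected to %s (model: %.*s)\n", trid.subnqn,
               (int)sizeof(cdata->mn), cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The state transitions logged after the EAL initialization (connect adminq, read vs, read cap, check en, disable and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1) are the steps spdk_nvme_connect() performs internally before the identify commands can be issued.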
00:23:35.272 [2024-05-16 20:32:48.118187] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150017 ] 00:23:35.272 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.272 [2024-05-16 20:32:48.158499] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:35.272 [2024-05-16 20:32:48.158566] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:35.272 [2024-05-16 20:32:48.158581] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:35.272 [2024-05-16 20:32:48.158584] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:35.272 [2024-05-16 20:32:48.158604] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:35.272 [2024-05-16 20:32:48.169905] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:35.272 [2024-05-16 20:32:48.180152] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:35.272 [2024-05-16 20:32:48.180160] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:35.272 [2024-05-16 20:32:48.180165] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180170] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180174] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180181] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180185] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180189] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180193] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180197] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180201] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180205] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180210] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180214] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180218] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180222] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180226] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local 
addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180230] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180234] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180238] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180242] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180246] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180251] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180255] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180259] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180263] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180267] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180271] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180275] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180279] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180283] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180287] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180291] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180295] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:35.272 [2024-05-16 20:32:48.180299] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:35.272 [2024-05-16 20:32:48.180301] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:35.272 [2024-05-16 20:32:48.180312] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.180322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182e00 00:23:35.272 [2024-05-16 20:32:48.185428] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.272 [2024-05-16 20:32:48.185436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:35.272 [2024-05-16 20:32:48.185441] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.185446] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: 
*DEBUG*: CNTLID 0x0001 00:23:35.272 [2024-05-16 20:32:48.185451] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:35.272 [2024-05-16 20:32:48.185456] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:35.272 [2024-05-16 20:32:48.185464] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.185471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.272 [2024-05-16 20:32:48.185489] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.272 [2024-05-16 20:32:48.185493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:35.272 [2024-05-16 20:32:48.185498] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:35.272 [2024-05-16 20:32:48.185502] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.185507] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:35.272 [2024-05-16 20:32:48.185512] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.185518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.272 [2024-05-16 20:32:48.185536] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.272 [2024-05-16 20:32:48.185541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:35.272 [2024-05-16 20:32:48.185545] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:35.272 [2024-05-16 20:32:48.185549] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.185554] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:35.272 [2024-05-16 20:32:48.185560] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.272 [2024-05-16 20:32:48.185566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.272 [2024-05-16 20:32:48.185581] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.272 [2024-05-16 20:32:48.185586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:35.272 [2024-05-16 20:32:48.185591] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:35.273 [2024-05-16 20:32:48.185594] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185601] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.273 [2024-05-16 20:32:48.185622] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.185627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.185631] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:35.273 [2024-05-16 20:32:48.185635] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:35.273 [2024-05-16 20:32:48.185638] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185644] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:35.273 [2024-05-16 20:32:48.185748] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:35.273 [2024-05-16 20:32:48.185751] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:35.273 [2024-05-16 20:32:48.185758] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.273 [2024-05-16 20:32:48.185785] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.185789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.185793] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:35.273 [2024-05-16 20:32:48.185797] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185803] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.273 [2024-05-16 20:32:48.185829] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.185833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.185837] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:35.273 [2024-05-16 20:32:48.185841] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:35.273 [2024-05-16 
20:32:48.185845] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185850] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:35.273 [2024-05-16 20:32:48.185858] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.185865] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:23:35.273 [2024-05-16 20:32:48.185914] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.185918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.185926] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:35.273 [2024-05-16 20:32:48.185930] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:35.273 [2024-05-16 20:32:48.185933] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:35.273 [2024-05-16 20:32:48.185937] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:35.273 [2024-05-16 20:32:48.185941] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:35.273 [2024-05-16 20:32:48.185945] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.185949] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185956] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.185963] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.185969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.273 [2024-05-16 20:32:48.185986] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.185990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.185996] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.273 [2024-05-16 20:32:48.186007] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186012] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.273 [2024-05-16 20:32:48.186017] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.273 [2024-05-16 20:32:48.186026] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.273 [2024-05-16 20:32:48.186035] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186039] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186047] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186053] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.273 [2024-05-16 20:32:48.186078] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.186082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.186086] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:35.273 [2024-05-16 20:32:48.186092] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186097] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186103] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186108] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186113] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.273 [2024-05-16 20:32:48.186137] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.186141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.186182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186187] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186192] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186199] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182e00 00:23:35.273 [2024-05-16 20:32:48.186230] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.186235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.186245] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:35.273 [2024-05-16 20:32:48.186255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186259] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186265] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186271] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.273 [2024-05-16 20:32:48.186277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:23:35.273 [2024-05-16 20:32:48.186305] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.273 [2024-05-16 20:32:48.186309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:35.273 [2024-05-16 20:32:48.186317] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:35.273 [2024-05-16 20:32:48.186322] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186327] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:35.274 [2024-05-16 20:32:48.186335] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:23:35.274 [2024-05-16 20:32:48.186368] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186380] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:35.274 [2024-05-16 20:32:48.186384] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186389] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:35.274 [2024-05-16 20:32:48.186395] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:35.274 [2024-05-16 20:32:48.186401] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:35.274 [2024-05-16 20:32:48.186405] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:35.274 [2024-05-16 20:32:48.186409] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:35.274 [2024-05-16 20:32:48.186413] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:35.274 [2024-05-16 20:32:48.186417] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:35.274 [2024-05-16 20:32:48.186435] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.274 [2024-05-16 20:32:48.186447] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.274 [2024-05-16 20:32:48.186461] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186470] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186476] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.274 [2024-05-16 20:32:48.186487] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186496] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186504] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:23:35.274 [2024-05-16 20:32:48.186508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186512] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186520] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.274 [2024-05-16 20:32:48.186546] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186554] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186561] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.274 [2024-05-16 20:32:48.186585] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186593] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186601] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182e00 00:23:35.274 [2024-05-16 20:32:48.186613] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182e00 00:23:35.274 [2024-05-16 20:32:48.186625] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182e00 00:23:35.274 [2024-05-16 20:32:48.186639] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182e00 00:23:35.274 [2024-05-16 20:32:48.186650] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: 
*DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186663] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186673] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186684] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186688] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186698] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:23:35.274 [2024-05-16 20:32:48.186704] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.274 [2024-05-16 20:32:48.186708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:35.274 [2024-05-16 20:32:48.186714] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:23:35.274 ===================================================== 00:23:35.274 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.274 ===================================================== 00:23:35.274 Controller Capabilities/Features 00:23:35.274 ================================ 00:23:35.274 Vendor ID: 8086 00:23:35.274 Subsystem Vendor ID: 8086 00:23:35.274 Serial Number: SPDK00000000000001 00:23:35.274 Model Number: SPDK bdev Controller 00:23:35.274 Firmware Version: 24.09 00:23:35.274 Recommended Arb Burst: 6 00:23:35.274 IEEE OUI Identifier: e4 d2 5c 00:23:35.274 Multi-path I/O 00:23:35.274 May have multiple subsystem ports: Yes 00:23:35.274 May have multiple controllers: Yes 00:23:35.274 Associated with SR-IOV VF: No 00:23:35.274 Max Data Transfer Size: 131072 00:23:35.274 Max Number of Namespaces: 32 00:23:35.274 Max Number of I/O Queues: 127 00:23:35.274 NVMe Specification Version (VS): 1.3 00:23:35.274 NVMe Specification Version (Identify): 1.3 00:23:35.274 Maximum Queue Entries: 128 00:23:35.275 Contiguous Queues Required: Yes 00:23:35.275 Arbitration Mechanisms Supported 00:23:35.275 Weighted Round Robin: Not Supported 00:23:35.275 Vendor Specific: Not Supported 00:23:35.275 Reset Timeout: 15000 ms 00:23:35.275 Doorbell Stride: 4 bytes 00:23:35.275 NVM Subsystem Reset: Not Supported 00:23:35.275 Command Sets Supported 00:23:35.275 NVM Command Set: Supported 00:23:35.275 Boot Partition: Not Supported 00:23:35.275 Memory Page Size Minimum: 4096 bytes 00:23:35.275 Memory Page Size Maximum: 4096 bytes 00:23:35.275 Persistent Memory Region: Not Supported 00:23:35.275 Optional Asynchronous Events Supported 00:23:35.275 Namespace Attribute Notices: Supported 00:23:35.275 Firmware Activation Notices: Not Supported 00:23:35.275 ANA Change Notices: Not Supported 00:23:35.275 PLE Aggregate Log Change Notices: Not Supported 00:23:35.275 LBA Status Info Alert Notices: Not Supported 
00:23:35.275 EGE Aggregate Log Change Notices: Not Supported 00:23:35.275 Normal NVM Subsystem Shutdown event: Not Supported 00:23:35.275 Zone Descriptor Change Notices: Not Supported 00:23:35.275 Discovery Log Change Notices: Not Supported 00:23:35.275 Controller Attributes 00:23:35.275 128-bit Host Identifier: Supported 00:23:35.275 Non-Operational Permissive Mode: Not Supported 00:23:35.275 NVM Sets: Not Supported 00:23:35.275 Read Recovery Levels: Not Supported 00:23:35.275 Endurance Groups: Not Supported 00:23:35.275 Predictable Latency Mode: Not Supported 00:23:35.275 Traffic Based Keep ALive: Not Supported 00:23:35.275 Namespace Granularity: Not Supported 00:23:35.275 SQ Associations: Not Supported 00:23:35.275 UUID List: Not Supported 00:23:35.275 Multi-Domain Subsystem: Not Supported 00:23:35.275 Fixed Capacity Management: Not Supported 00:23:35.275 Variable Capacity Management: Not Supported 00:23:35.275 Delete Endurance Group: Not Supported 00:23:35.275 Delete NVM Set: Not Supported 00:23:35.275 Extended LBA Formats Supported: Not Supported 00:23:35.275 Flexible Data Placement Supported: Not Supported 00:23:35.275 00:23:35.275 Controller Memory Buffer Support 00:23:35.275 ================================ 00:23:35.275 Supported: No 00:23:35.275 00:23:35.275 Persistent Memory Region Support 00:23:35.275 ================================ 00:23:35.275 Supported: No 00:23:35.275 00:23:35.275 Admin Command Set Attributes 00:23:35.275 ============================ 00:23:35.275 Security Send/Receive: Not Supported 00:23:35.275 Format NVM: Not Supported 00:23:35.275 Firmware Activate/Download: Not Supported 00:23:35.275 Namespace Management: Not Supported 00:23:35.275 Device Self-Test: Not Supported 00:23:35.275 Directives: Not Supported 00:23:35.275 NVMe-MI: Not Supported 00:23:35.275 Virtualization Management: Not Supported 00:23:35.275 Doorbell Buffer Config: Not Supported 00:23:35.275 Get LBA Status Capability: Not Supported 00:23:35.275 Command & Feature Lockdown Capability: Not Supported 00:23:35.275 Abort Command Limit: 4 00:23:35.275 Async Event Request Limit: 4 00:23:35.275 Number of Firmware Slots: N/A 00:23:35.275 Firmware Slot 1 Read-Only: N/A 00:23:35.275 Firmware Activation Without Reset: N/A 00:23:35.275 Multiple Update Detection Support: N/A 00:23:35.275 Firmware Update Granularity: No Information Provided 00:23:35.275 Per-Namespace SMART Log: No 00:23:35.275 Asymmetric Namespace Access Log Page: Not Supported 00:23:35.275 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:35.275 Command Effects Log Page: Supported 00:23:35.275 Get Log Page Extended Data: Supported 00:23:35.275 Telemetry Log Pages: Not Supported 00:23:35.275 Persistent Event Log Pages: Not Supported 00:23:35.275 Supported Log Pages Log Page: May Support 00:23:35.275 Commands Supported & Effects Log Page: Not Supported 00:23:35.275 Feature Identifiers & Effects Log Page:May Support 00:23:35.275 NVMe-MI Commands & Effects Log Page: May Support 00:23:35.275 Data Area 4 for Telemetry Log: Not Supported 00:23:35.275 Error Log Page Entries Supported: 128 00:23:35.275 Keep Alive: Supported 00:23:35.275 Keep Alive Granularity: 10000 ms 00:23:35.275 00:23:35.275 NVM Command Set Attributes 00:23:35.275 ========================== 00:23:35.275 Submission Queue Entry Size 00:23:35.275 Max: 64 00:23:35.275 Min: 64 00:23:35.275 Completion Queue Entry Size 00:23:35.275 Max: 16 00:23:35.275 Min: 16 00:23:35.275 Number of Namespaces: 32 00:23:35.275 Compare Command: Supported 00:23:35.275 Write Uncorrectable Command: Not 
Supported 00:23:35.275 Dataset Management Command: Supported 00:23:35.275 Write Zeroes Command: Supported 00:23:35.275 Set Features Save Field: Not Supported 00:23:35.275 Reservations: Supported 00:23:35.275 Timestamp: Not Supported 00:23:35.275 Copy: Supported 00:23:35.275 Volatile Write Cache: Present 00:23:35.275 Atomic Write Unit (Normal): 1 00:23:35.275 Atomic Write Unit (PFail): 1 00:23:35.275 Atomic Compare & Write Unit: 1 00:23:35.275 Fused Compare & Write: Supported 00:23:35.275 Scatter-Gather List 00:23:35.275 SGL Command Set: Supported 00:23:35.275 SGL Keyed: Supported 00:23:35.275 SGL Bit Bucket Descriptor: Not Supported 00:23:35.275 SGL Metadata Pointer: Not Supported 00:23:35.275 Oversized SGL: Not Supported 00:23:35.275 SGL Metadata Address: Not Supported 00:23:35.275 SGL Offset: Supported 00:23:35.275 Transport SGL Data Block: Not Supported 00:23:35.275 Replay Protected Memory Block: Not Supported 00:23:35.275 00:23:35.275 Firmware Slot Information 00:23:35.275 ========================= 00:23:35.275 Active slot: 1 00:23:35.275 Slot 1 Firmware Revision: 24.09 00:23:35.275 00:23:35.275 00:23:35.275 Commands Supported and Effects 00:23:35.275 ============================== 00:23:35.275 Admin Commands 00:23:35.275 -------------- 00:23:35.275 Get Log Page (02h): Supported 00:23:35.275 Identify (06h): Supported 00:23:35.275 Abort (08h): Supported 00:23:35.275 Set Features (09h): Supported 00:23:35.275 Get Features (0Ah): Supported 00:23:35.275 Asynchronous Event Request (0Ch): Supported 00:23:35.275 Keep Alive (18h): Supported 00:23:35.275 I/O Commands 00:23:35.275 ------------ 00:23:35.275 Flush (00h): Supported LBA-Change 00:23:35.275 Write (01h): Supported LBA-Change 00:23:35.275 Read (02h): Supported 00:23:35.275 Compare (05h): Supported 00:23:35.275 Write Zeroes (08h): Supported LBA-Change 00:23:35.275 Dataset Management (09h): Supported LBA-Change 00:23:35.275 Copy (19h): Supported LBA-Change 00:23:35.275 Unknown (79h): Supported LBA-Change 00:23:35.275 Unknown (7Ah): Supported 00:23:35.275 00:23:35.275 Error Log 00:23:35.275 ========= 00:23:35.275 00:23:35.275 Arbitration 00:23:35.275 =========== 00:23:35.275 Arbitration Burst: 1 00:23:35.275 00:23:35.275 Power Management 00:23:35.275 ================ 00:23:35.275 Number of Power States: 1 00:23:35.275 Current Power State: Power State #0 00:23:35.275 Power State #0: 00:23:35.275 Max Power: 0.00 W 00:23:35.275 Non-Operational State: Operational 00:23:35.275 Entry Latency: Not Reported 00:23:35.275 Exit Latency: Not Reported 00:23:35.275 Relative Read Throughput: 0 00:23:35.275 Relative Read Latency: 0 00:23:35.275 Relative Write Throughput: 0 00:23:35.275 Relative Write Latency: 0 00:23:35.275 Idle Power: Not Reported 00:23:35.275 Active Power: Not Reported 00:23:35.275 Non-Operational Permissive Mode: Not Supported 00:23:35.275 00:23:35.275 Health Information 00:23:35.275 ================== 00:23:35.275 Critical Warnings: 00:23:35.275 Available Spare Space: OK 00:23:35.275 Temperature: OK 00:23:35.275 Device Reliability: OK 00:23:35.275 Read Only: No 00:23:35.275 Volatile Memory Backup: OK 00:23:35.275 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:35.275 Temperature Threshold: [2024-05-16 20:32:48.186789] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182e00 00:23:35.275 [2024-05-16 20:32:48.186796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 
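The controller report above is what the identify utility printed after attaching over RDMA to the NVMe-oF target at 192.168.100.8:4420 (subsystem nqn.2016-06.io.spdk:cnode1), and the surrounding *DEBUG* entries are the driver-level fabric CONNECT / PROPERTY GET-SET / IDENTIFY exchange that produced it. As a minimal sketch, not part of the captured test, the same controller data can be read through SPDK's public NVMe API roughly as follows; the transport parameters are taken from the log, while the program name, error handling and printed fields are illustrative assumptions:

```c
/* Illustrative sketch only -- not taken from the test sources. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Target taken from the log: NVMe-oF over RDMA, 192.168.100.8:4420. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the fabric CONNECT / PROPERTY GET/SET / IDENTIFY exchange
	 * that the debug trace above records. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	/* A few of the fields shown in the report (Vendor ID, SN, MN, FR). */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("VID: 0x%04x SN: %.20s MN: %.40s FR: %.8s\n",
	       cdata->vid, cdata->sn, cdata->mn, cdata->fr);

	/* Teardown (spdk_nvme_detach) is sketched further below. */
	return 0;
}
```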
00:23:35.275 [2024-05-16 20:32:48.186816] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.275 [2024-05-16 20:32:48.186820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:35.275 [2024-05-16 20:32:48.186825] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:23:35.275 [2024-05-16 20:32:48.186845] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:35.275 [2024-05-16 20:32:48.186852] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 11179 doesn't match qid 00:23:35.275 [2024-05-16 20:32:48.186863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32586 cdw0:5 sqhd:a310 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.186868] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 11179 doesn't match qid 00:23:35.276 [2024-05-16 20:32:48.186874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32586 cdw0:5 sqhd:a310 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.186879] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 11179 doesn't match qid 00:23:35.276 [2024-05-16 20:32:48.186884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32586 cdw0:5 sqhd:a310 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.186889] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 11179 doesn't match qid 00:23:35.276 [2024-05-16 20:32:48.186894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32586 cdw0:5 sqhd:a310 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.186901] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.186907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.186925] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.186930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.186935] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.186941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.186946] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.186963] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.186967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.186971] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:35.276 [2024-05-16 20:32:48.186975] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:35.276 [2024-05-16 20:32:48.186979] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.186985] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.186994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187014] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187022] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187029] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187059] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187067] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187074] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187106] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187115] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187121] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187148] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187158] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187164] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187189] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
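The entries starting at "Prepare to destruct SSD" above (RTD3E = 0 us, shutdown timeout = 10000 ms, the ABORTED - SQ DELETION notices, then a FABRIC PROPERTY SET followed by a long run of FABRIC PROPERTY GET completions) look like the driver's controller-shutdown handshake: write CC.SHN and poll CSTS until shutdown completes or the 10000 ms timeout expires. From the application side this whole sequence is triggered by a single detach call. A minimal sketch under that assumption, reusing the ctrlr handle from the previous example (the helper name is hypothetical):

```c
/* Illustrative sketch only -- not taken from the test sources. */
#include "spdk/nvme.h"

/*
 * Hypothetical teardown helper for the controller handle obtained in the
 * previous sketch. While a controller is attached, admin completions (the
 * keep-alive sent every 5000000 us earlier in this trace, AER and
 * Get/Set Features responses) are reaped by polling the admin queue.
 */
static void
teardown(struct spdk_nvme_ctrlr *ctrlr)
{
	/* One admin-queue poll; real applications call this periodically. */
	spdk_nvme_ctrlr_process_admin_completions(ctrlr);

	/* Detach drives the shutdown handshake seen in the trace above and
	 * then frees the admin qpair and controller state. */
	spdk_nvme_detach(ctrlr);
}
```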
00:23:35.276 [2024-05-16 20:32:48.187193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187198] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187204] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187232] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187240] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187247] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187272] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187281] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187288] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187311] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187319] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187326] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187348] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187357] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187363] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187369] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187390] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187398] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187405] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187433] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187441] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187448] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187472] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187480] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187488] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187514] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187522] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187529] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.276 [2024-05-16 20:32:48.187556] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.276 [2024-05-16 20:32:48.187560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:35.276 [2024-05-16 20:32:48.187564] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf780 length 0x10 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187571] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.276 [2024-05-16 20:32:48.187577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.277 [2024-05-16 20:32:48.187593] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.277 [2024-05-16 20:32:48.187597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:35.277 [2024-05-16 20:32:48.187601] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:23:35.277 [2024-05-16 20:32:48.187608] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.277 [2024-05-16 20:32:48.187614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.277 [2024-05-16 20:32:48.187635] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.277 [2024-05-16 20:32:48.187639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:35.277 [2024-05-16 20:32:48.187643] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:23:35.277 [2024-05-16 20:32:48.187650] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.277 [2024-05-16 20:32:48.187656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.277 [2024-05-16 20:32:48.187675] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.277 [2024-05-16 20:32:48.187679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:35.277 [2024-05-16 20:32:48.187683] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:23:35.277 [2024-05-16 20:32:48.187690] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.277 [2024-05-16 20:32:48.187696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.277 [2024-05-16 20:32:48.187713] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.277 [2024-05-16 20:32:48.187717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:35.277 [2024-05-16 20:32:48.187723] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:23:35.277 [2024-05-16 20:32:48.187730] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:23:35.277 [2024-05-16 20:32:48.187736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.277 [2024-05-16 20:32:48.187755] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:35.277 
[2024-05-16 20:32:48.187759 - 20:32:48.193458] nvme_rdma.c / nvme_qpair.c: *DEBUG*/*NOTICE*: admin-qpair polling cycle for FABRIC PROPERTY GET (qid:0 cid:3), one cycle per property-get completion in this window: CQ recv completion; SUCCESS (00/00) cdw0:1 with sqhd advancing 0x000d-0x001f and wrapping through 0x0000-0x0016 (p:0 m:0 dnr:0); nvme_rdma_request_ready local addr cycling 0x2000003cf640-0x2000003cfaf0 length 0x10 lkey 0x182e00; nvme_rdma_qpair_submit_request local addr 0x2000003d0740 length 0x40 lkey 0x182e00; nvme_admin_qpair_print_command FABRIC PROPERTY GET SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:35.277-00:23:35.279 
[2024-05-16 20:32:48.193462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0017 p:0 m:0 dnr:0 00:23:35.279 [2024-05-16 20:32:48.193466] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:23:35.279 [2024-05-16 20:32:48.193471] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:23:35.279 0 Kelvin (-273 Celsius) 00:23:35.279 Available Spare: 0% 00:23:35.279 Available Spare Threshold: 0% 00:23:35.279 Life Percentage Used: 0% 00:23:35.279 Data Units Read: 0 00:23:35.279 Data Units Written: 0 00:23:35.279 Host Read Commands: 0 00:23:35.279 Host Write Commands: 0 00:23:35.279 Controller Busy Time: 0 minutes 00:23:35.279 Power Cycles: 0 00:23:35.279 Power On Hours: 0 hours 00:23:35.279 Unsafe Shutdowns: 0 00:23:35.279 Unrecoverable Media Errors: 0 00:23:35.279 Lifetime Error Log Entries: 0 00:23:35.279 Warning Temperature Time: 0 minutes 00:23:35.279 Critical Temperature Time: 0 minutes 00:23:35.279 00:23:35.279 Number of Queues 00:23:35.279 ================ 00:23:35.279 Number of I/O Submission Queues: 127 00:23:35.279 Number of I/O Completion Queues: 127 00:23:35.279 00:23:35.279 Active Namespaces 00:23:35.279 ================= 00:23:35.279 Namespace ID:1 00:23:35.279 Error Recovery Timeout: Unlimited 00:23:35.279 Command Set Identifier: NVM (00h) 00:23:35.279 Deallocate: Supported 00:23:35.279 Deallocated/Unwritten Error: Not Supported 00:23:35.279 Deallocated Read Value: Unknown 00:23:35.279 Deallocate in Write Zeroes: Not Supported 00:23:35.279 Deallocated Guard Field: 0xFFFF 00:23:35.280 Flush: Supported 00:23:35.280 Reservation: Supported 00:23:35.280 Namespace Sharing Capabilities: Multiple Controllers 00:23:35.280 Size (in LBAs): 131072 (0GiB) 00:23:35.280 Capacity (in LBAs): 131072 (0GiB) 00:23:35.280 Utilization (in LBAs): 131072 (0GiB) 00:23:35.280 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:35.280 EUI64: ABCDEF0123456789 00:23:35.280 UUID: f2940c9d-313c-46b6-96e0-0d83e0fe146e 00:23:35.280 Thin Provisioning: Not Supported 00:23:35.280 Per-NS Atomic Units: Yes 00:23:35.280 Atomic Boundary Size (Normal): 0 00:23:35.280 Atomic Boundary Size (PFail): 0 00:23:35.280 Atomic Boundary Offset: 0 00:23:35.280 Maximum Single Source Range Length: 65535 00:23:35.280 Maximum Copy Length: 65535 00:23:35.280 Maximum Source Range Count: 1 00:23:35.280 NGUID/EUI64 Never Reused: No 00:23:35.280 Namespace Write Protected: No 00:23:35.280 Number of LBA Formats: 1 00:23:35.280 Current LBA Format: LBA Format #00 00:23:35.280 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:35.280 00:23:35.280 20:32:48 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:35.280 20:32:48 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.280 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.280 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:35.538 
20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:35.538 rmmod nvme_rdma 00:23:35.538 rmmod nvme_fabrics 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3149758 ']' 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3149758 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3149758 ']' 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3149758 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3149758 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3149758' 00:23:35.538 killing process with pid 3149758 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3149758 00:23:35.538 [2024-05-16 20:32:48.350796] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:35.538 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3149758 00:23:35.538 [2024-05-16 20:32:48.427317] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:23:35.796 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.796 20:32:48 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:35.796 00:23:35.796 real 0m8.174s 00:23:35.796 user 0m8.145s 00:23:35.796 sys 0m5.165s 00:23:35.796 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:35.796 20:32:48 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:35.796 ************************************ 00:23:35.796 END TEST nvmf_identify 00:23:35.796 ************************************ 00:23:35.796 20:32:48 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:35.796 20:32:48 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:35.796 20:32:48 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:35.796 20:32:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:35.796 ************************************ 00:23:35.796 START TEST nvmf_perf 00:23:35.796 
************************************ 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:35.796 * Looking for test storage... 00:23:35.796 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.796 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.054 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.054 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:36.054 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:36.054 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.054 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.054 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.054 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:36.055 20:32:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == 
mlx5 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:42.607 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:42.607 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:42.607 Found net devices under 0000:da:00.0: mlx_0_0 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:42.607 Found net devices under 0000:da:00.1: mlx_0_1 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:42.607 20:32:54 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:42.607 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:42.607 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:42.607 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:42.607 altname enp218s0f0np0 00:23:42.607 altname ens818f0np0 00:23:42.608 inet 192.168.100.8/24 scope global mlx_0_0 00:23:42.608 valid_lft forever preferred_lft forever 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:42.608 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:42.608 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:42.608 altname enp218s0f1np1 00:23:42.608 altname ens818f1np1 00:23:42.608 inet 192.168.100.9/24 scope global mlx_0_1 00:23:42.608 valid_lft forever preferred_lft forever 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # 
continue 2 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:42.608 192.168.100.9' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:42.608 192.168.100.9' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:42.608 192.168.100.9' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3153579 00:23:42.608 20:32:54 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3153579 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3153579 ']' 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:42.608 20:32:54 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:42.608 [2024-05-16 20:32:54.785131] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:23:42.608 [2024-05-16 20:32:54.785182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.608 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.608 [2024-05-16 20:32:54.847740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.608 [2024-05-16 20:32:54.920686] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.608 [2024-05-16 20:32:54.920726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.608 [2024-05-16 20:32:54.920733] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.608 [2024-05-16 20:32:54.920739] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.608 [2024-05-16 20:32:54.920743] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:42.608 [2024-05-16 20:32:54.920806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.608 [2024-05-16 20:32:54.920900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.608 [2024-05-16 20:32:54.920970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.608 [2024-05-16 20:32:54.920971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.608 20:32:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:42.865 20:32:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:42.865 20:32:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.865 20:32:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.865 20:32:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:42.865 20:32:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.865 20:32:55 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:42.865 20:32:55 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:46.137 20:32:58 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:46.137 20:32:58 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:46.137 20:32:58 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:23:46.137 20:32:58 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:46.137 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:46.137 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:23:46.137 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:46.137 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:23:46.137 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:23:46.394 [2024-05-16 20:32:59.201216] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:23:46.394 [2024-05-16 20:32:59.222447] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcc5e00/0xcf3a00) succeed. 00:23:46.394 [2024-05-16 20:32:59.232963] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcc7440/0xd53a00) succeed. 
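With the RDMA transport created, the trace that follows wires a subsystem onto it. A condensed sketch of that RPC sequence, using only the values visible in the log (rpc.py talks to the default /var/tmp/spdk.sock socket; $rpc stands for the scripts/rpc.py path used above):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                    # returns Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe bdev picked up via gen_nvme.sh
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420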
00:23:46.394 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.650 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:46.650 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:46.907 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:46.907 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:47.165 20:32:59 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:47.165 [2024-05-16 20:33:00.081082] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:47.165 [2024-05-16 20:33:00.081500] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:47.165 20:33:00 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:47.422 20:33:00 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:23:47.422 20:33:00 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:23:47.422 20:33:00 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:47.422 20:33:00 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:23:48.789 Initializing NVMe Controllers 00:23:48.789 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:23:48.789 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:23:48.789 Initialization complete. Launching workers. 00:23:48.789 ======================================================== 00:23:48.789 Latency(us) 00:23:48.789 Device Information : IOPS MiB/s Average min max 00:23:48.789 PCIE (0000:5f:00.0) NSID 1 from core 0: 99185.80 387.44 322.24 33.62 6819.56 00:23:48.789 ======================================================== 00:23:48.789 Total : 99185.80 387.44 322.24 33.62 6819.56 00:23:48.789 00:23:48.789 20:33:01 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:48.789 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.060 Initializing NVMe Controllers 00:23:52.060 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:52.060 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:52.060 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:52.060 Initialization complete. Launching workers. 
00:23:52.060 ======================================================== 00:23:52.060 Latency(us) 00:23:52.060 Device Information : IOPS MiB/s Average min max 00:23:52.060 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6587.99 25.73 151.58 50.11 4197.90 00:23:52.060 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5179.99 20.23 192.85 68.02 4250.08 00:23:52.060 ======================================================== 00:23:52.060 Total : 11767.99 45.97 169.75 50.11 4250.08 00:23:52.060 00:23:52.060 20:33:04 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:52.060 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.340 Initializing NVMe Controllers 00:23:55.340 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.340 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:55.340 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:55.340 Initialization complete. Launching workers. 00:23:55.340 ======================================================== 00:23:55.340 Latency(us) 00:23:55.340 Device Information : IOPS MiB/s Average min max 00:23:55.340 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17976.93 70.22 1779.59 504.06 5535.58 00:23:55.340 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4081.00 15.94 7901.19 4060.19 11005.38 00:23:55.340 ======================================================== 00:23:55.340 Total : 22057.94 86.16 2912.17 504.06 11005.38 00:23:55.340 00:23:55.340 20:33:08 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:23:55.340 20:33:08 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:55.597 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.772 Initializing NVMe Controllers 00:23:59.772 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.772 Controller IO queue size 128, less than required. 00:23:59.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:59.772 Controller IO queue size 128, less than required. 00:23:59.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:59.772 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:59.772 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:59.772 Initialization complete. Launching workers. 
00:23:59.772 ======================================================== 00:23:59.772 Latency(us) 00:23:59.772 Device Information : IOPS MiB/s Average min max 00:23:59.772 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3438.42 859.60 37413.15 15490.02 85765.56 00:23:59.772 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3551.88 887.97 35761.27 15954.44 57868.93 00:23:59.772 ======================================================== 00:23:59.772 Total : 6990.30 1747.57 36573.80 15490.02 85765.56 00:23:59.772 00:23:59.772 20:33:12 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:23:59.772 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.028 No valid NVMe controllers or AIO or URING devices found 00:24:00.028 Initializing NVMe Controllers 00:24:00.028 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.028 Controller IO queue size 128, less than required. 00:24:00.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.028 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:00.028 Controller IO queue size 128, less than required. 00:24:00.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.028 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:00.028 WARNING: Some requested NVMe devices were skipped 00:24:00.028 20:33:13 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:24:00.285 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.450 Initializing NVMe Controllers 00:24:04.450 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.451 Controller IO queue size 128, less than required. 00:24:04.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.451 Controller IO queue size 128, less than required. 00:24:04.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.451 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.451 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.451 Initialization complete. Launching workers. 
00:24:04.451 00:24:04.451 ==================== 00:24:04.451 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:04.451 RDMA transport: 00:24:04.451 dev name: mlx5_0 00:24:04.451 polls: 399658 00:24:04.451 idle_polls: 396298 00:24:04.451 completions: 43918 00:24:04.451 queued_requests: 1 00:24:04.451 total_send_wrs: 21959 00:24:04.451 send_doorbell_updates: 3081 00:24:04.451 total_recv_wrs: 22086 00:24:04.451 recv_doorbell_updates: 3083 00:24:04.451 --------------------------------- 00:24:04.451 00:24:04.451 ==================== 00:24:04.451 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:04.451 RDMA transport: 00:24:04.451 dev name: mlx5_0 00:24:04.451 polls: 404976 00:24:04.451 idle_polls: 404710 00:24:04.451 completions: 20014 00:24:04.451 queued_requests: 1 00:24:04.451 total_send_wrs: 10007 00:24:04.451 send_doorbell_updates: 251 00:24:04.451 total_recv_wrs: 10134 00:24:04.451 recv_doorbell_updates: 252 00:24:04.451 --------------------------------- 00:24:04.451 ======================================================== 00:24:04.451 Latency(us) 00:24:04.451 Device Information : IOPS MiB/s Average min max 00:24:04.451 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5489.50 1372.37 23375.52 11447.43 56171.87 00:24:04.451 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2501.50 625.37 51290.38 31085.05 76494.38 00:24:04.451 ======================================================== 00:24:04.451 Total : 7991.00 1997.75 32113.98 11447.43 76494.38 00:24:04.451 00:24:04.451 20:33:17 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:04.451 20:33:17 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:04.707 rmmod nvme_rdma 00:24:04.707 rmmod nvme_fabrics 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3153579 ']' 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3153579 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3153579 ']' 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3153579 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3153579 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3153579' 00:24:04.707 killing process with pid 3153579 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3153579 00:24:04.707 [2024-05-16 20:33:17.628931] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:04.707 20:33:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3153579 00:24:04.707 [2024-05-16 20:33:17.685074] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:24:07.256 20:33:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:07.256 20:33:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:07.256 00:24:07.256 real 0m31.092s 00:24:07.256 user 1m41.623s 00:24:07.256 sys 0m5.527s 00:24:07.256 20:33:19 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:07.256 20:33:19 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:07.256 ************************************ 00:24:07.256 END TEST nvmf_perf 00:24:07.256 ************************************ 00:24:07.256 20:33:19 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:24:07.256 20:33:19 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:07.256 20:33:19 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:07.256 20:33:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:07.256 ************************************ 00:24:07.256 START TEST nvmf_fio_host 00:24:07.256 ************************************ 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:24:07.256 * Looking for test storage... 
00:24:07.256 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.256 20:33:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.257 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.257 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.257 20:33:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.257 20:33:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:13.842 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == 
unknown ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:13.842 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:13.842 Found net devices under 0000:da:00.0: mlx_0_0 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:13.842 Found net devices under 0000:da:00.1: mlx_0_1 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:13.842 20:33:25 
nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.842 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:13.843 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:13.843 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:13.843 altname enp218s0f0np0 00:24:13.843 altname ens818f0np0 00:24:13.843 inet 192.168.100.8/24 scope global mlx_0_0 00:24:13.843 valid_lft forever preferred_lft forever 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:13.843 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:13.843 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:13.843 altname enp218s0f1np1 00:24:13.843 altname ens818f1np1 00:24:13.843 inet 192.168.100.9/24 scope global mlx_0_1 00:24:13.843 valid_lft forever preferred_lft forever 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:13.843 
20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:13.843 192.168.100.9' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:13.843 192.168.100.9' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:13.843 192.168.100.9' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3161615 00:24:13.843 20:33:25 
nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3161615 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3161615 ']' 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:13.843 20:33:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.843 [2024-05-16 20:33:25.894725] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:13.843 [2024-05-16 20:33:25.894767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.843 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.843 [2024-05-16 20:33:25.954148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.843 [2024-05-16 20:33:26.034258] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.843 [2024-05-16 20:33:26.034295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.843 [2024-05-16 20:33:26.034302] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.843 [2024-05-16 20:33:26.034308] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.843 [2024-05-16 20:33:26.034313] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.843 [2024-05-16 20:33:26.034352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.843 [2024-05-16 20:33:26.034373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.843 [2024-05-16 20:33:26.034472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.843 [2024-05-16 20:33:26.034473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.843 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:13.843 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:24:13.843 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:13.843 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.843 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.843 [2024-05-16 20:33:26.709032] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x156c9b0/0x1570ea0) succeed. 
00:24:13.843 [2024-05-16 20:33:26.719473] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x156dff0/0x15b2530) succeed. 00:24:14.102 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.102 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:14.102 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.102 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.103 Malloc1 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.103 [2024-05-16 20:33:26.923790] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:14.103 [2024-05-16 20:33:26.924180] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:14.103 20:33:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:14.362 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:14.362 fio-3.35 00:24:14.362 Starting 1 thread 00:24:14.362 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.893 00:24:16.893 test: (groupid=0, jobs=1): err= 0: pid=3161977: Thu May 16 20:33:29 2024 00:24:16.893 read: IOPS=17.3k, BW=67.4MiB/s (70.7MB/s)(135MiB/2004msec) 00:24:16.893 slat (nsec): min=1400, max=29439, avg=1520.88, stdev=437.88 00:24:16.893 clat (usec): min=1726, max=6729, avg=3679.76, stdev=83.94 00:24:16.893 lat (usec): min=1741, max=6731, avg=3681.28, stdev=83.83 00:24:16.893 clat percentiles (usec): 00:24:16.893 | 1.00th=[ 3621], 5.00th=[ 3654], 10.00th=[ 3654], 20.00th=[ 3654], 00:24:16.893 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3687], 60.00th=[ 3687], 00:24:16.893 | 70.00th=[ 
3687], 80.00th=[ 3687], 90.00th=[ 3687], 95.00th=[ 3720], 00:24:16.893 | 99.00th=[ 3785], 99.50th=[ 4015], 99.90th=[ 4359], 99.95th=[ 5735], 00:24:16.893 | 99.99th=[ 6652] 00:24:16.893 bw ( KiB/s): min=67528, max=69848, per=100.00%, avg=69064.00, stdev=1045.91, samples=4 00:24:16.893 iops : min=16882, max=17462, avg=17266.00, stdev=261.48, samples=4 00:24:16.893 write: IOPS=17.3k, BW=67.5MiB/s (70.8MB/s)(135MiB/2004msec); 0 zone resets 00:24:16.893 slat (nsec): min=1446, max=17486, avg=1617.96, stdev=399.00 00:24:16.893 clat (usec): min=2535, max=6723, avg=3679.35, stdev=94.58 00:24:16.893 lat (usec): min=2546, max=6725, avg=3680.97, stdev=94.48 00:24:16.893 clat percentiles (usec): 00:24:16.893 | 1.00th=[ 3621], 5.00th=[ 3654], 10.00th=[ 3654], 20.00th=[ 3654], 00:24:16.893 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3687], 60.00th=[ 3687], 00:24:16.893 | 70.00th=[ 3687], 80.00th=[ 3687], 90.00th=[ 3687], 95.00th=[ 3720], 00:24:16.893 | 99.00th=[ 3785], 99.50th=[ 4015], 99.90th=[ 5276], 99.95th=[ 6194], 00:24:16.893 | 99.99th=[ 6718] 00:24:16.893 bw ( KiB/s): min=67680, max=69808, per=99.99%, avg=69104.00, stdev=963.08, samples=4 00:24:16.893 iops : min=16920, max=17452, avg=17276.00, stdev=240.77, samples=4 00:24:16.893 lat (msec) : 2=0.01%, 4=99.47%, 10=0.53% 00:24:16.893 cpu : usr=99.65%, sys=0.00%, ctx=15, majf=0, minf=2 00:24:16.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:16.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:16.893 issued rwts: total=34585,34626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:16.893 00:24:16.893 Run status group 0 (all jobs): 00:24:16.893 READ: bw=67.4MiB/s (70.7MB/s), 67.4MiB/s-67.4MiB/s (70.7MB/s-70.7MB/s), io=135MiB (142MB), run=2004-2004msec 00:24:16.893 WRITE: bw=67.5MiB/s (70.8MB/s), 67.5MiB/s-67.5MiB/s (70.8MB/s-70.8MB/s), io=135MiB (142MB), run=2004-2004msec 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.893 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:16.894 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:16.894 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:16.894 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:16.894 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:16.894 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:16.894 20:33:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:16.894 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:16.894 fio-3.35 00:24:16.894 Starting 1 thread 00:24:17.152 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.683 00:24:19.683 test: (groupid=0, jobs=1): err= 0: pid=3162554: Thu May 16 20:33:32 2024 00:24:19.683 read: IOPS=12.7k, BW=199MiB/s (209MB/s)(391MiB/1964msec) 00:24:19.683 slat (nsec): min=2305, max=42116, avg=2698.72, stdev=1373.47 00:24:19.683 clat (usec): min=259, max=9996, avg=1915.09, stdev=1313.19 00:24:19.683 lat (usec): min=261, max=10001, avg=1917.79, stdev=1313.82 00:24:19.683 clat percentiles (usec): 00:24:19.683 | 1.00th=[ 586], 5.00th=[ 840], 10.00th=[ 988], 20.00th=[ 1156], 00:24:19.683 | 30.00th=[ 1287], 40.00th=[ 1418], 50.00th=[ 1532], 60.00th=[ 1696], 00:24:19.683 | 70.00th=[ 1876], 80.00th=[ 2147], 90.00th=[ 3097], 95.00th=[ 5276], 00:24:19.683 | 99.00th=[ 7242], 99.50th=[ 8029], 99.90th=[ 8848], 99.95th=[ 9503], 00:24:19.683 | 99.99th=[ 9896] 00:24:19.683 bw ( KiB/s): min=96640, max=99840, per=48.43%, avg=98704.00, stdev=1427.38, samples=4 00:24:19.683 iops : min= 6040, max= 6240, avg=6169.00, stdev=89.21, samples=4 00:24:19.683 write: IOPS=7058, BW=110MiB/s (116MB/s)(201MiB/1819msec); 0 zone resets 00:24:19.683 slat (usec): min=27, max=118, avg=29.54, stdev= 6.36 00:24:19.683 clat (usec): min=4483, max=22178, avg=14296.08, stdev=1961.85 00:24:19.683 lat (usec): min=4511, max=22208, avg=14325.62, stdev=1961.43 00:24:19.683 clat percentiles (usec): 00:24:19.683 | 1.00th=[ 7504], 5.00th=[11731], 10.00th=[12256], 20.00th=[12911], 00:24:19.683 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14222], 60.00th=[14615], 00:24:19.683 | 70.00th=[15139], 80.00th=[15664], 90.00th=[16581], 95.00th=[17433], 00:24:19.683 | 99.00th=[19530], 99.50th=[20317], 99.90th=[21627], 99.95th=[21890], 00:24:19.683 | 99.99th=[22152] 00:24:19.683 bw ( KiB/s): min=97824, max=103648, per=90.27%, avg=101944.00, stdev=2758.12, 
samples=4 00:24:19.683 iops : min= 6114, max= 6478, avg=6371.50, stdev=172.38, samples=4 00:24:19.683 lat (usec) : 500=0.32%, 750=1.82%, 1000=4.92% 00:24:19.683 lat (msec) : 2=42.72%, 4=10.89%, 10=5.93%, 20=33.15%, 50=0.26% 00:24:19.683 cpu : usr=97.01%, sys=1.30%, ctx=183, majf=0, minf=1 00:24:19.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:19.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:19.683 issued rwts: total=25016,12839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:19.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:19.683 00:24:19.683 Run status group 0 (all jobs): 00:24:19.683 READ: bw=199MiB/s (209MB/s), 199MiB/s-199MiB/s (209MB/s-209MB/s), io=391MiB (410MB), run=1964-1964msec 00:24:19.683 WRITE: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=201MiB (210MB), run=1819-1819msec 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:19.683 rmmod nvme_rdma 00:24:19.683 rmmod nvme_fabrics 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3161615 ']' 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3161615 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3161615 ']' 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3161615 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3161615 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:19.683 20:33:32 
nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3161615' 00:24:19.683 killing process with pid 3161615 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3161615 00:24:19.683 [2024-05-16 20:33:32.420820] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:19.683 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3161615 00:24:19.683 [2024-05-16 20:33:32.501202] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:24:19.942 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.942 20:33:32 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:19.942 00:24:19.942 real 0m12.848s 00:24:19.942 user 0m43.527s 00:24:19.942 sys 0m5.199s 00:24:19.942 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:19.942 20:33:32 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.942 ************************************ 00:24:19.942 END TEST nvmf_fio_host 00:24:19.942 ************************************ 00:24:19.942 20:33:32 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:19.942 20:33:32 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:19.942 20:33:32 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:19.942 20:33:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:19.942 ************************************ 00:24:19.942 START TEST nvmf_failover 00:24:19.942 ************************************ 00:24:19.942 20:33:32 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:19.942 * Looking for test storage... 
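For reference, a minimal sketch of the fio-over-RDMA invocation that the nvmf_fio_host test above exercises, assembled from the commands echoed in this log (it assumes the SPDK fio NVMe plugin has already been built and that a target is listening on 192.168.100.8:4420):

    # Preload the SPDK fio NVMe plugin and point fio at the NVMe-oF/RDMA subsystem.
    # Paths and the example job file are the ones used by this run; adjust to your tree.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096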
00:24:19.942 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:19.942 20:33:32 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.943 20:33:32 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.521 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:26.522 
20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:26.522 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:26.522 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:26.522 Found net devices under 0000:da:00.0: mlx_0_0 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:26.522 Found net devices under 0000:da:00.1: mlx_0_1 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:26.522 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:26.522 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:26.522 altname enp218s0f0np0 00:24:26.522 altname ens818f0np0 00:24:26.522 inet 192.168.100.8/24 scope global mlx_0_0 00:24:26.522 valid_lft forever preferred_lft forever 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:26.522 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:26.522 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:26.522 altname enp218s0f1np1 00:24:26.522 altname ens818f1np1 00:24:26.522 inet 192.168.100.9/24 scope global mlx_0_1 00:24:26.522 valid_lft forever preferred_lft forever 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:26.522 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:26.523 20:33:38 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:26.523 192.168.100.9' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:26.523 192.168.100.9' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:26.523 192.168.100.9' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3166312 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3166312 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3166312 ']' 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:26.523 20:33:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:26.523 [2024-05-16 20:33:38.657944] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:26.523 [2024-05-16 20:33:38.657987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.523 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.523 [2024-05-16 20:33:38.717523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:26.523 [2024-05-16 20:33:38.795142] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.523 [2024-05-16 20:33:38.795177] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.523 [2024-05-16 20:33:38.795183] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.523 [2024-05-16 20:33:38.795189] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.523 [2024-05-16 20:33:38.795197] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
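A compact sketch of how the failover test brings up its NVMe-oF target process before any RPC configuration, taken from the nvmfappstart command echoed above (shared-memory id, tracepoint mask and core mask are the ones used in this run):

    # Launch the SPDK nvmf target in the background with reactors on cores 1-3 (0xE)
    # and all tracepoint groups enabled; the test then waits for the RPC socket
    # /var/tmp/spdk.sock before configuring the target.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &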
00:24:26.523 [2024-05-16 20:33:38.795310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.523 [2024-05-16 20:33:38.795330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.523 [2024-05-16 20:33:38.795331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.523 20:33:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:26.523 20:33:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:26.523 20:33:39 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:26.523 20:33:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.523 20:33:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:26.523 20:33:39 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.523 20:33:39 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:26.783 [2024-05-16 20:33:39.668360] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa91110/0xa95600) succeed. 00:24:26.783 [2024-05-16 20:33:39.678732] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa926b0/0xad6c90) succeed. 00:24:27.042 20:33:39 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:27.042 Malloc0 00:24:27.042 20:33:39 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.301 20:33:40 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.561 20:33:40 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:27.561 [2024-05-16 20:33:40.526204] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:27.561 [2024-05-16 20:33:40.526622] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:27.820 20:33:40 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:27.820 [2024-05-16 20:33:40.694886] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:27.820 20:33:40 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:28.079 [2024-05-16 20:33:40.875562] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3166621 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3166621 /var/tmp/bdevperf.sock 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3166621 ']' 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:28.079 20:33:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.016 20:33:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:29.016 20:33:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:29.016 20:33:41 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:29.016 NVMe0n1 00:24:29.275 20:33:42 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:29.275 00:24:29.275 20:33:42 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.275 20:33:42 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3166851 00:24:29.275 20:33:42 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:30.654 20:33:43 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:30.654 20:33:43 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:33.942 20:33:46 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.942 00:24:33.942 20:33:46 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:33.942 20:33:46 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:37.228 20:33:49 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 
00:24:37.228 [2024-05-16 20:33:50.015279] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:37.228 20:33:50 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:38.165 20:33:51 nvmf_rdma.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:38.424 20:33:51 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 3166851 00:24:44.997 0 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 3166621 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3166621 ']' 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3166621 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3166621 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3166621' 00:24:44.997 killing process with pid 3166621 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3166621 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3166621 00:24:44.997 20:33:57 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:44.997 [2024-05-16 20:33:40.946262] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:24:44.997 [2024-05-16 20:33:40.946318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166621 ] 00:24:44.997 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.997 [2024-05-16 20:33:41.005733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.997 [2024-05-16 20:33:41.081206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.997 Running I/O for 15 seconds... 
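Before the abort trace below, a minimal sketch of the failover sequence this run drives, condensed from the rpc.py and bdevperf commands echoed above (rpc.py paths shortened; IP, NQN and ports are the ones used in this log; the ABORTED - SQ DELETION completions that follow are consistent with tearing down the listener the initiator was actively using):

    # Target side: RDMA transport, one Malloc namespace, listeners on three ports.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
    done
    # Initiator side: bdevperf attaches NVMe0 over ports 4420 and 4421, runs verify I/O,
    # and the test removes and re-adds listeners while I/O is in flight to force failover.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420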
00:24:44.997 [2024-05-16 20:33:44.398559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:108 nsid:1 lba:19648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x3b200 00:24:44.997 
[2024-05-16 20:33:44.398872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x3b200 00:24:44.997 [2024-05-16 20:33:44.398987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.997 [2024-05-16 20:33:44.398995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19944 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007584000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.998 [2024-05-16 20:33:44.399527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 
dnr:0 00:24:44.998 [2024-05-16 20:33:44.399542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x3b200 00:24:44.998 [2024-05-16 20:33:44.399549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:20168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x3b200 00:24:44.999 
[2024-05-16 20:33:44.399807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.399989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.399997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.400004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.400011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.400018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.400025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.400031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.400039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.400045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.400053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.400059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.400068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.400075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:44.999 [2024-05-16 20:33:44.400082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x3b200 00:24:44.999 [2024-05-16 20:33:44.400088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20464 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007502000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x3b200 00:24:45.000 [2024-05-16 20:33:44.400215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.400233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.400241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.400247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20544 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.408777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:44.408783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.410623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:45.000 [2024-05-16 20:33:44.410634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:45.000 [2024-05-16 20:33:44.410640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20592 len:8 PRP1 0x0 PRP2 0x0 00:24:45.000 [2024-05-16 20:33:44.410647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.410682] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:24:45.000 [2024-05-16 20:33:44.410691] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:24:45.000 [2024-05-16 20:33:44.410697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:45.000 [2024-05-16 20:33:44.410728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.000 [2024-05-16 20:33:44.410736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.410744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.000 [2024-05-16 20:33:44.410750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.410757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.000 [2024-05-16 20:33:44.410764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.410771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.000 [2024-05-16 20:33:44.410777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:44.428774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.000 [2024-05-16 20:33:44.428787] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:45.000 [2024-05-16 20:33:44.428794] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.000 [2024-05-16 20:33:44.431600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.000 [2024-05-16 20:33:44.477476] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:45.000 [2024-05-16 20:33:47.836671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:47.836711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:47.836726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:47.836733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:47.836742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:47.836752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:47.836761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:47.836767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.000 [2024-05-16 20:33:47.836774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.000 [2024-05-16 20:33:47.836781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 
20:33:47.836858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.836987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.836993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837001] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.837007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.837021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.837034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.837048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x3b200 00:24:45.001 [2024-05-16 20:33:47.837063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x3b200 00:24:45.001 [2024-05-16 20:33:47.837078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x3b200 00:24:45.001 [2024-05-16 20:33:47.837092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x3b200 00:24:45.001 [2024-05-16 20:33:47.837108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x3b200 00:24:45.001 [2024-05-16 20:33:47.837123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x3b200 00:24:45.001 [2024-05-16 20:33:47.837136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x3b200 00:24:45.001 [2024-05-16 20:33:47.837150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x3b200 00:24:45.001 [2024-05-16 20:33:47.837164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.837179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.837193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.837207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.001 [2024-05-16 20:33:47.837215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.001 [2024-05-16 20:33:47.837221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 
[2024-05-16 20:33:47.837278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 
sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 
00:24:45.002 [2024-05-16 20:33:47.837563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.002 [2024-05-16 20:33:47.837623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x3b200 00:24:45.002 [2024-05-16 20:33:47.837741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.002 [2024-05-16 20:33:47.837749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 
sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.837974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.003 [2024-05-16 20:33:47.837988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.837996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.003 [2024-05-16 20:33:47.838002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.003 [2024-05-16 20:33:47.838017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.003 [2024-05-16 20:33:47.838031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.003 [2024-05-16 20:33:47.838044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.003 [2024-05-16 20:33:47.838059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.003 [2024-05-16 20:33:47.838074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.003 [2024-05-16 20:33:47.838088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98624 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x20000754a000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.003 [2024-05-16 20:33:47.838271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x3b200 00:24:45.003 [2024-05-16 20:33:47.838277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:47.838339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:47.838353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:47.838367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:47.838382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 
00:24:45.004 [2024-05-16 20:33:47.838390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:47.838396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:47.838410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:47.838427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:47.838442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x3b200 00:24:45.004 
[2024-05-16 20:33:47.838531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.838539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:47.838545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.850168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:45.004 [2024-05-16 20:33:47.850182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:45.004 [2024-05-16 20:33:47.850189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98728 len:8 PRP1 0x0 PRP2 0x0 00:24:45.004 [2024-05-16 20:33:47.850196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.850231] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:45.004 [2024-05-16 20:33:47.850240] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:24:45.004 [2024-05-16 20:33:47.850247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.004 [2024-05-16 20:33:47.850274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.004 [2024-05-16 20:33:47.850282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.850290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.004 [2024-05-16 20:33:47.850297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.850304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.004 [2024-05-16 20:33:47.850310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.850316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.004 [2024-05-16 20:33:47.850322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:47.868436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:45.004 [2024-05-16 20:33:47.868455] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:45.004 [2024-05-16 20:33:47.868464] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:45.004 [2024-05-16 20:33:47.871259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.004 [2024-05-16 20:33:47.912920] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:45.004 [2024-05-16 20:33:52.215554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:52.215596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:52.215620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:52.215635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.004 [2024-05-16 20:33:52.215649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:52.215664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:52.215678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:52.215693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:52.215707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:52.215721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x3b200 00:24:45.004 [2024-05-16 20:33:52.215736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.004 [2024-05-16 20:33:52.215744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x3b200 00:24:45.005 [2024-05-16 20:33:52.215885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.215900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.215914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.215929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.215944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.215958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.215973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.215987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.215995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 
[2024-05-16 20:33:52.216009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 
20:33:52.216149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.005 [2024-05-16 20:33:52.216280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.005 [2024-05-16 20:33:52.216287] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.006 [2024-05-16 20:33:52.216446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x3b200 00:24:45.006 [2024-05-16 20:33:52.216461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x3b200 00:24:45.006 [2024-05-16 20:33:52.216475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x3b200 00:24:45.006 [2024-05-16 20:33:52.216489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x3b200 00:24:45.006 [2024-05-16 20:33:52.216503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x3b200 00:24:45.006 [2024-05-16 20:33:52.216517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x3b200 00:24:45.006 [2024-05-16 20:33:52.216531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x3b200 00:24:45.006 [2024-05-16 20:33:52.216545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:24:45.006 [2024-05-16 20:33:52.216553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x3b200 00:24:45.006 
[2024-05-16 20:33:52.216560 - 20:33:52.217402] nvme_qpair.c: 243/474: *NOTICE*: repeated command/completion pairs for every remaining queued request on qid:1: READ lba:55976-56376 (len:8, SGL KEYED DATA BLOCK, key:0x3b200) and WRITE lba:56744-56800 (len:8, SGL DATA BLOCK OFFSET 0x0), each reported ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:f200 p:0 m:0 dnr:0 (runtime 00:24:45.006 through 00:24:45.008)
00:24:45.008 [2024-05-16 20:33:52.219472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:45.008 [2024-05-16 20:33:52.219483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:45.008 [2024-05-16 20:33:52.219489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56384 len:8 PRP1 0x0 PRP2 0x0
00:24:45.008 [2024-05-16 20:33:52.219496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.008 [2024-05-16 20:33:52.219532] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:24:45.008 [2024-05-16 20:33:52.219541] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:24:45.008 [2024-05-16 20:33:52.219548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:45.008 [2024-05-16 20:33:52.222362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.008 [2024-05-16 20:33:52.241466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:45.008 [2024-05-16 20:33:52.287216] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
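For reference, the failover recorded above is possible because the bdevperf host was attached to the same subsystem over several RDMA ports; the second half of failover.sh, traced further below, repeats the same wiring, so the calls can be read off this log. A minimal sketch of that setup (the for-loop and the rpc shell variable are shorthand only; the commands, addresses and sockets are taken from the trace):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # target side: expose the subsystem on the extra RDMA ports
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
  # host side (bdevperf RPC socket): register the same controller name on each port
  # so bdev_nvme keeps alternate transport IDs to fail over to
  for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # detaching the currently active path triggers a 'Start failover from ... to ...' notice like the ones in this log
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1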
00:24:45.008
00:24:45.008                                         Latency(us)
00:24:45.008 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:45.008 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:45.008 Verification LBA range: start 0x0 length 0x4000
00:24:45.008 NVMe0n1                     :      15.00   13953.58      54.51     313.84       0.00    8947.40     358.89 1046578.71
00:24:45.008 ===================================================================================================================
00:24:45.008 Total                       :   13953.58      54.51     313.84       0.00    8947.40     358.89 1046578.71
00:24:45.008 Received shutdown signal, test time was about 15.000000 seconds
00:24:45.008
00:24:45.008                                         Latency(us)
00:24:45.008 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:45.008 ===================================================================================================================
00:24:45.008 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3169373
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3169373 /var/tmp/bdevperf.sock
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3169373 ']'
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:45.008 20:33:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.577 20:33:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:45.577 20:33:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:45.577 20:33:58 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:45.835 [2024-05-16 20:33:58.649385] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:45.835 20:33:58 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:45.835 [2024-05-16 20:33:58.817959] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:46.093 20:33:58 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.093 NVMe0n1 00:24:46.351 20:33:59 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.351 00:24:46.351 20:33:59 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.609 00:24:46.609 20:33:59 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.609 20:33:59 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:46.868 20:33:59 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:47.169 20:33:59 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:50.482 20:34:02 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:50.482 20:34:02 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:50.482 20:34:03 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3170297 00:24:50.482 20:34:03 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.482 20:34:03 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 3170297 00:24:51.418 0 00:24:51.418 20:34:04 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:51.418 [2024-05-16 20:33:57.672392] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:24:51.418 [2024-05-16 20:33:57.672450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169373 ]
00:24:51.418 EAL: No free 2048 kB hugepages reported on node 1
00:24:51.418 [2024-05-16 20:33:57.731843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:51.418 [2024-05-16 20:33:57.801480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:51.418 [2024-05-16 20:33:59.897900] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:24:51.418 [2024-05-16 20:33:59.898579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.418 [2024-05-16 20:33:59.898614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.418 [2024-05-16 20:33:59.915687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:51.418 [2024-05-16 20:33:59.931600] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:51.418 Running I/O for 1 seconds...
00:24:51.418
00:24:51.418                                         Latency(us)
00:24:51.418 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:51.418 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:51.418 Verification LBA range: start 0x0 length 0x4000
00:24:51.418 NVMe0n1                     :       1.00   17598.90      68.75       0.00       0.00    7231.43     187.25   13169.62
00:24:51.418 ===================================================================================================================
00:24:51.418 Total                       :   17598.90      68.75       0.00       0.00    7231.43     187.25   13169.62
00:24:51.418 20:34:04 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:51.418 20:34:04 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:51.676 20:34:04 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:51.676 20:34:04 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:51.676 20:34:04 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:51.935 20:34:04 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:52.193 20:34:04 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:55.522 20:34:07 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:55.522 20:34:07 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 3169373
00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3169373 ']'
00:24:55.522 20:34:08
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3169373 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3169373 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3169373' 00:24:55.522 killing process with pid 3169373 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3169373 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3169373 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:55.522 20:34:08 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:55.780 rmmod nvme_rdma 00:24:55.780 rmmod nvme_fabrics 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3166312 ']' 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3166312 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3166312 ']' 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3166312 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3166312 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3166312' 00:24:55.780 killing process with pid 3166312 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3166312 00:24:55.780 [2024-05-16 20:34:08.690474] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:55.780 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3166312 00:24:55.780 [2024-05-16 20:34:08.758365] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:24:56.038 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.038 20:34:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:56.038 00:24:56.038 real 0m36.175s 00:24:56.038 user 2m3.348s 00:24:56.038 sys 0m6.270s 00:24:56.038 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:56.038 20:34:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.038 ************************************ 00:24:56.038 END TEST nvmf_failover 00:24:56.038 ************************************ 00:24:56.038 20:34:08 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:56.038 20:34:08 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:56.038 20:34:08 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:56.038 20:34:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:56.038 ************************************ 00:24:56.038 START TEST nvmf_host_discovery 00:24:56.038 ************************************ 00:24:56.038 20:34:09 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:56.295 * Looking for test storage... 
00:24:56.295 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:56.295 20:34:09 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.295 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:56.296 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
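host/discovery.sh bails out immediately on RDMA; the guard traced at discovery.sh@11-13 amounts to the following (the $TEST_TRANSPORT variable name is an assumption based on the rest of the suite, the trace only shows its expanded value 'rdma'):

  # host/discovery.sh, sketch of the transport guard
  if [ "$TEST_TRANSPORT" == rdma ]; then
    echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
    exit 0
  fi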
00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:24:56.296 00:24:56.296 real 0m0.110s 00:24:56.296 user 0m0.055s 00:24:56.296 sys 0m0.063s 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.296 ************************************ 00:24:56.296 END TEST nvmf_host_discovery 00:24:56.296 ************************************ 00:24:56.296 20:34:09 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:56.296 20:34:09 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:56.296 20:34:09 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:56.296 20:34:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:56.296 ************************************ 00:24:56.296 START TEST nvmf_host_multipath_status 00:24:56.296 ************************************ 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:56.296 * Looking for test storage... 00:24:56.296 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.296 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.552 20:34:09 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.552 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.553 20:34:09 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:03.111 
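Two RPC endpoints are in play throughout these host tests: target-side configuration goes to the nvmf target's application socket, while bdevperf (started with -z -r) answers on the socket named by bdevperf_rpc_sock above. A minimal sketch of the convention, using only calls that occur in this trace (the target's default socket path is implied rather than shown, since the traced target-side calls simply omit -s):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # target side: no -s, so rpc.py talks to the nvmf target's default application socket
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  # initiator side: -s must match the -r argument bdevperf was started with
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers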
20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:25:03.111 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:25:03.111 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:25:03.111 Found net devices under 0000:da:00.0: mlx_0_0 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:25:03.111 Found net devices under 0000:da:00.1: mlx_0_1 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:03.111 20:34:15 
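The 'Found net devices under 0000:da:00.x' lines above come from globbing the net/ directory beneath each matching PCI function in sysfs; the mapping can be reproduced by hand as follows (PCI addresses and expected interface names are taken from the trace):

  for pci in 0000:da:00.0 0000:da:00.1; do
    # each Mellanox port exposes its netdev name under /sys/bus/pci/devices/<addr>/net/
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
  done
  # expected on this node: mlx_0_0 and mlx_0_1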
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:03.111 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:03.112 20:34:15 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:03.112 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:03.112 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:25:03.112 altname enp218s0f0np0 00:25:03.112 altname ens818f0np0 00:25:03.112 inet 192.168.100.8/24 scope global mlx_0_0 00:25:03.112 valid_lft forever preferred_lft forever 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:03.112 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:03.112 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:25:03.112 altname enp218s0f1np1 00:25:03.112 altname ens818f1np1 00:25:03.112 inet 192.168.100.9/24 scope global mlx_0_1 00:25:03.112 valid_lft forever preferred_lft forever 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:03.112 20:34:15 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:03.112 192.168.100.9' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:03.112 192.168.100.9' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- 
nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:03.112 192.168.100.9' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3174639 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3174639 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3174639 ']' 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.112 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:03.113 20:34:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:03.113 [2024-05-16 20:34:15.470784] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:25:03.113 [2024-05-16 20:34:15.470828] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.113 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.113 [2024-05-16 20:34:15.530664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:03.113 [2024-05-16 20:34:15.608684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
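For reference, the address discovery traced above (get_rdma_if_list / get_ip_address in nvmf/common.sh) reduces to the pipeline below. This is a minimal sketch assuming the mlx_0_0 / mlx_0_1 interfaces found earlier in the trace, not the harness's exact code:

    # Derive the first and second RDMA target IPs the way the trace does:
    # "ip -o -4" prints one line per address, field 4 is the CIDR, cut drops the prefix length.
    get_ip_address() {
        local ifc=$1
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ips=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)                 # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run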
00:25:03.113 [2024-05-16 20:34:15.608719] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.113 [2024-05-16 20:34:15.608726] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.113 [2024-05-16 20:34:15.608735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.113 [2024-05-16 20:34:15.608756] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.113 [2024-05-16 20:34:15.608798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.113 [2024-05-16 20:34:15.608801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.370 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:03.370 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:03.370 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:03.370 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.370 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:03.370 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.370 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3174639 00:25:03.370 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:03.628 [2024-05-16 20:34:16.488073] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb072f0/0xb0b7e0) succeed. 00:25:03.628 [2024-05-16 20:34:16.497093] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb087f0/0xb4ce70) succeed. 
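With nvmf_tgt up and the RDMA transport created, the entries that follow build the subsystem the host will reach over two listeners. Condensed into one place for readability (commands copied from the trace, issued against the default /var/tmp/spdk.sock, error handling omitted):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Two listeners on the same RDMA address but different ports give the host two paths.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421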
00:25:03.628 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:03.970 Malloc0 00:25:03.970 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:03.970 20:34:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:04.253 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:04.510 [2024-05-16 20:34:17.270767] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:04.510 [2024-05-16 20:34:17.271108] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:25:04.510 [2024-05-16 20:34:17.427339] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3175102 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3175102 /var/tmp/bdevperf.sock 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3175102 ']' 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:04.510 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:04.768 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:04.768 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:04.768 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:05.025 20:34:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:05.283 Nvme0n1 00:25:05.283 20:34:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:05.540 Nvme0n1 00:25:05.540 20:34:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:05.540 20:34:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:07.439 20:34:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:07.439 20:34:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:07.696 20:34:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:07.696 20:34:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:09.069 20:34:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:09.069 20:34:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:09.069 20:34:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.069 20:34:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:09.069 20:34:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.069 20:34:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:09.069 20:34:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:09.069 20:34:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.069 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.069 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:09.069 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.069 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:09.327 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.327 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:09.327 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.327 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.587 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.587 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:09.587 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.587 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.587 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.587 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:09.587 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.587 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.846 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.846 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:09.846 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:10.104 20:34:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:10.361 20:34:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:11.299 20:34:24 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:11.299 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:11.299 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.299 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.299 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.299 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:11.299 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:11.299 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.557 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.557 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:11.557 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.557 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.815 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.815 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.815 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.815 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:12.072 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.072 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:12.072 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.072 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:12.072 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.072 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:12.072 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.072 20:34:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:12.330 20:34:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.330 20:34:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:12.330 20:34:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:12.587 20:34:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:12.587 20:34:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.959 20:34:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:14.217 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.217 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:14.217 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.217 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:14.476 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.476 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:14.476 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.476 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.476 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.476 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:14.476 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.476 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.734 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.734 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:14.734 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:14.992 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:14.992 20:34:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:16.367 20:34:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:16.367 20:34:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:16.367 20:34:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.367 20:34:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.367 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.367 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:16.367 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.367 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
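Each check_status round above is six of these probes (current, connected, accessible for ports 4420 and 4421). The repeated pattern reduces to the helper below; the names follow the trace, but this is an illustration rather than the script's literal code:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Query bdevperf's io_paths and compare one field of one listener port
    # against the expected value; the exit status is the check result.
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }

    # Example, matching the round traced here (4420 non_optimized, 4421 inaccessible):
    port_status 4420 current true
    port_status 4421 accessible false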
00:25:16.367 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.367 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:16.367 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.367 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.625 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.625 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.625 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.625 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.884 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.884 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.884 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.884 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.884 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.884 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:16.884 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.884 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:17.142 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.142 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:17.142 20:34:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:17.400 20:34:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:17.400 20:34:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:18.775 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false 
false 00:25:18.775 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:18.775 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.776 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.034 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.034 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.034 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.034 20:34:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:19.293 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.293 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:19.293 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.293 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.293 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.293 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:19.293 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.293 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.551 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.551 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:19.551 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:19.810 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:19.810 20:34:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:21.184 20:34:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:21.184 20:34:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:21.184 20:34:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.184 20:34:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.184 20:34:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.184 20:34:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:21.184 20:34:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.184 20:34:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.184 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.184 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.184 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.184 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.443 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.443 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.443 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.443 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.702 
20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.702 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:21.702 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.702 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.702 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.702 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.702 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.702 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.961 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.961 20:34:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:22.219 20:34:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:22.219 20:34:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:22.219 20:34:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:22.477 20:34:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:23.413 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:23.413 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.413 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.413 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.672 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.672 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:23.672 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.672 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.931 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.931 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.931 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.931 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.190 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.190 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.190 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.190 20:34:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.190 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.190 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.190 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.190 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.449 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.449 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.449 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.449 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.707 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.707 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:24.708 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:24.708 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:24.966 20:34:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:25.899 20:34:38 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:25.899 20:34:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:25.899 20:34:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.899 20:34:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.157 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.157 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:26.157 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.157 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.415 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.415 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:26.415 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.415 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:26.415 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.415 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:26.415 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.415 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:26.674 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.674 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.674 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.674 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:26.932 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.932 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:26.932 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.932 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.932 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.932 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:26.932 20:34:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:27.189 20:34:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:27.447 20:34:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:28.382 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:28.382 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:28.382 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.382 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.640 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.640 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:28.640 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.640 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.640 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.640 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.640 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.640 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:28.898 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.898 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:28.898 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.898 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:29.156 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.156 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:29.156 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.156 20:34:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.156 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.156 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:29.156 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.156 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:29.414 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.415 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:29.415 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:29.673 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:29.673 20:34:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:31.046 20:34:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:31.046 20:34:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.046 20:34:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.046 20:34:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.046 20:34:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.046 20:34:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:31.046 20:34:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.046 20:34:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
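The trace above exercises each ANA combination the same way: multipath_status.sh points nvmf_subsystem_listener_set_ana_state at both rdma listeners (ports 4420 and 4421) of nqn.2016-06.io.spdk:cnode1, sleeps one second, then queries bdev_nvme_get_io_paths over the bdevperf RPC socket and picks the current/connected/accessible flags per trsvcid with jq. A minimal sketch of that pattern, with set_ana and path_field as illustrative stand-ins for the script's own set_ANA_state and port_status helpers; only the rpc.py subcommands, the socket path and the jq filter are taken from the trace:

# Illustrative stand-ins; everything not shown in the trace above is an assumption.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

set_ana() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n "$1"
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

path_field() {   # $1 = trsvcid, $2 = current|connected|accessible, $3 = expected value
  local got
  got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
  [[ $got == "$3" ]]
}

set_ana non_optimized inaccessible
sleep 1   # give the initiator time to react to the ANA change
path_field 4420 current true && path_field 4421 accessible false

check_status in the script is simply six such port_status checks in a row, one per flag for each of the two ports, which is the sequence visible in the trace.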
00:25:31.046 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.046 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:31.046 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.046 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:31.304 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.304 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:31.304 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.305 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:31.562 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.562 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:31.562 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.562 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:31.562 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.562 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:31.562 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.562 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:31.820 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.820 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3175102 00:25:31.820 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3175102 ']' 00:25:31.820 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3175102 00:25:31.820 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:31.821 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:31.821 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3175102 00:25:31.821 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:31.821 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' 
reactor_2 = sudo ']' 00:25:31.821 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3175102' 00:25:31.821 killing process with pid 3175102 00:25:31.821 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3175102 00:25:31.821 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3175102 00:25:32.085 Connection closed with partial response: 00:25:32.085 00:25:32.085 00:25:32.085 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3175102 00:25:32.085 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:32.085 [2024-05-16 20:34:17.468544] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:25:32.085 [2024-05-16 20:34:17.468596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175102 ] 00:25:32.085 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.085 [2024-05-16 20:34:17.523321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.085 [2024-05-16 20:34:17.596351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.085 Running I/O for 90 seconds... 00:25:32.085 [2024-05-16 20:34:30.156963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:88 nsid:1 lba:122256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122328 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:32.085 [2024-05-16 20:34:30.157404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 
len:0x1000 key:0x3b200 00:25:32.085 [2024-05-16 20:34:30.157410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x3b200 
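Every failed I/O in the replayed try.txt capture shows up as a pair of lines: nvme_io_qpair_print_command prints the submitted command (opcode, cid, lba and its SGL descriptor) and spdk_nvme_print_completion prints the resulting status, here ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. NVMe path-related status (SCT 3h, SC 02h), which is what bdevperf sees on a path whose ANA state has just been made inaccessible. A hypothetical way to pull the affected read LBAs out of such a capture (the file name comes from the trace above; the field layout is assumed from these entries):

# Hypothetical post-processing of a capture like try.txt.
grep -o 'READ sqid:1 cid:[0-9]* nsid:1 lba:[0-9]*' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt |
  sed 's/.*lba://' | sort -n | head   # lowest LBAs that failed while the path was inaccessible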
00:25:32.086 [2024-05-16 20:34:30.157560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.086 [2024-05-16 20:34:30.157812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.086 [2024-05-16 20:34:30.157827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.086 [2024-05-16 20:34:30.157843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.086 [2024-05-16 20:34:30.157858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.086 [2024-05-16 20:34:30.157874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:32.086 [2024-05-16 20:34:30.157946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x3b200 00:25:32.086 [2024-05-16 20:34:30.157952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.157961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.157967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.157977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.157983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.157993] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.157999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.158015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.158030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.158046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.158062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.158077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x3b200 00:25:32.087 [2024-05-16 20:34:30.158360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:32.087 [2024-05-16 20:34:30.158429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.158916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.087 [2024-05-16 20:34:30.158922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:32.087 [2024-05-16 20:34:30.159231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
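Two flavours of command print alternate in the dump: the READ entries carry an RDMA keyed SGL (DATA BLOCK ADDRESS plus len and the remote key 0x3b200), telling the target where in host memory to place the data, while the WRITE entries show SGL DATA BLOCK OFFSET 0x0 len:0x1000, which most likely means the 4 KiB payload was sent as in-capsule data. Both kinds fail with the same ANA status while the listener is inaccessible. A hypothetical one-liner to tally the failed commands by opcode in such a capture (again assuming the line format shown above):

# Hypothetical tally over a capture like try.txt.
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z][A-Z]*' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt |
  awk '{print $NF}' | sort | uniq -c   # count of READ vs WRITE commands that were printed as failed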
00:25:32.088 [2024-05-16 20:34:30.159320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159522] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:30.159766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159921] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:30.159936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.088 [2024-05-16 20:34:30.159942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:42.633252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:42.633289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:42.633336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x3b200 00:25:32.088 [2024-05-16 20:34:42.633344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:32.088 [2024-05-16 20:34:42.633354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.633361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.633376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.633392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.633408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.633911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.633928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.633944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.633964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.633979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.633988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.633994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.634041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x3b200 00:25:32.089 
[2024-05-16 20:34:42.634088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.634119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.634167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.634182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.634214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.634230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 
20:34:42.634239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x3b200 00:25:32.089 [2024-05-16 20:34:42.634245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.634261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:32.089 [2024-05-16 20:34:42.634269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.089 [2024-05-16 20:34:42.634276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x3b200 
00:25:32.090 [2024-05-16 20:34:42.634387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634709] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:32.090 [2024-05-16 20:34:42.634859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x3b200 00:25:32.090 [2024-05-16 20:34:42.634967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:32.090 [2024-05-16 20:34:42.634976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.090 [2024-05-16 20:34:42.634983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:32.090 Received shutdown signal, test time was about 26.287818 seconds 00:25:32.090 00:25:32.090 Latency(us) 00:25:32.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.090 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:32.090 Verification LBA range: start 0x0 length 0x4000 00:25:32.090 Nvme0n1 : 26.29 15443.49 60.33 0.00 0.00 8268.40 511.02 3019898.88 
00:25:32.090 =================================================================================================================== 00:25:32.090 Total : 15443.49 60.33 0.00 0.00 8268.40 511.02 3019898.88 00:25:32.090 20:34:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:32.349 rmmod nvme_rdma 00:25:32.349 rmmod nvme_fabrics 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3174639 ']' 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3174639 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3174639 ']' 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3174639 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3174639 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3174639' 00:25:32.349 killing process with pid 3174639 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3174639 00:25:32.349 [2024-05-16 20:34:45.242162] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:32.349 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3174639 
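For reference, the teardown traced above (rpc.py nvmf_delete_subsystem, nvmftestfini/nvmfcleanup, killprocess) reduces, roughly, to the sequence below. This is a minimal sketch: the NQN and PID are the values from this run, and the module removal is shown without the set +e retry loop that nvmf/common.sh actually wraps around it.

# Delete the NVMe-oF subsystem from the running target via the SPDK RPC client.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# Unload the host-side fabrics modules (nvmfcleanup in nvmf/common.sh).
sync
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics
# Stop the nvmf_tgt application (killprocess in autotest_common.sh); 3174639 is this run's target PID.
kill 3174639
wait 3174639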
00:25:32.349 [2024-05-16 20:34:45.294892] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:25:32.606 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:32.606 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:32.606 00:25:32.606 real 0m36.304s 00:25:32.606 user 1m43.794s 00:25:32.606 sys 0m7.959s 00:25:32.606 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.606 20:34:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:32.606 ************************************ 00:25:32.606 END TEST nvmf_host_multipath_status 00:25:32.607 ************************************ 00:25:32.607 20:34:45 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:25:32.607 20:34:45 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:32.607 20:34:45 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:32.607 20:34:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:32.607 ************************************ 00:25:32.607 START TEST nvmf_discovery_remove_ifc 00:25:32.607 ************************************ 00:25:32.607 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:25:32.866 * Looking for test storage... 00:25:32.866 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:32.866 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.866 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:32.866 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.866 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.866 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.866 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.866 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.867 20:34:45 
nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 
-- # export NVMF_APP_SHM_ID 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:25:32.867 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:25:32.867 00:25:32.867 real 0m0.110s 00:25:32.867 user 0m0.059s 00:25:32.867 sys 0m0.059s 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.867 20:34:45 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.867 ************************************ 00:25:32.867 END TEST nvmf_discovery_remove_ifc 00:25:32.867 ************************************ 00:25:32.867 20:34:45 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:25:32.867 20:34:45 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:32.867 20:34:45 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:32.867 20:34:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:32.867 ************************************ 00:25:32.867 START TEST nvmf_identify_kernel_target 00:25:32.867 ************************************ 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:25:32.867 * Looking for test storage... 
00:25:32.867 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.867 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.868 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.868 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- 
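The 20:34:51 trace that follows is nvmftestinit enumerating the Mellanox NICs on this rig before the identify_kernel_target run. As a rough sketch only: the vendor/device IDs and the sysfs pattern come from the trace below, while the lspci call is an illustrative equivalent, not the harness's own PCI lookup.

# List ConnectX adapters by vendor ID 0x15b3; this rig reports 0000:da:00.0 and 0000:da:00.1 (device 0x1015).
lspci -nn -d 15b3:
# Resolve each PCI function to its netdev the same way common.sh does, via sysfs.
for pci in 0000:da:00.0 0000:da:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"   # -> mlx_0_0 and mlx_0_1 on this system
done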
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.126 20:34:45 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:25:39.684 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:25:39.684 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:25:39.684 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:25:39.685 Found net devices under 0000:da:00.0: mlx_0_0 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:25:39.685 Found net devices under 0000:da:00.1: mlx_0_1 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:39.685 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:39.685 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:25:39.685 altname enp218s0f0np0 00:25:39.685 altname ens818f0np0 00:25:39.685 inet 192.168.100.8/24 scope global mlx_0_0 00:25:39.685 valid_lft forever preferred_lft forever 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target 
-- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:39.685 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:39.685 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:25:39.685 altname enp218s0f1np1 00:25:39.685 altname ens818f1np1 00:25:39.685 inet 192.168.100.9/24 scope global mlx_0_1 00:25:39.685 valid_lft forever preferred_lft forever 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:39.685 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:39.686 20:34:51 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:39.686 192.168.100.9' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:39.686 192.168.100.9' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:39.686 192.168.100.9' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.686 20:34:51 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:39.686 20:34:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:42.215 Waiting for block devices as requested 00:25:42.215 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:25:42.215 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:42.215 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:42.472 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:42.472 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:42.472 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:42.472 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:42.730 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:42.730 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:42.730 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:42.730 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:42.987 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:42.987 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:42.987 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:43.245 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:43.245 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:43.245 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # 
is_block_zoned nvme0n1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:43.504 No valid GPT data, bailing 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:43.504 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:25:43.762 00:25:43.762 Discovery Log Number of Records 2, Generation counter 2 00:25:43.762 =====Discovery Log Entry 0====== 00:25:43.762 trtype: rdma 00:25:43.762 adrfam: ipv4 00:25:43.762 subtype: current discovery subsystem 00:25:43.762 treq: not specified, sq flow control disable supported 00:25:43.762 portid: 1 00:25:43.762 trsvcid: 4420 00:25:43.762 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:43.762 traddr: 192.168.100.8 00:25:43.762 eflags: 
none 00:25:43.762 rdma_prtype: not specified 00:25:43.762 rdma_qptype: connected 00:25:43.762 rdma_cms: rdma-cm 00:25:43.762 rdma_pkey: 0x0000 00:25:43.762 =====Discovery Log Entry 1====== 00:25:43.762 trtype: rdma 00:25:43.762 adrfam: ipv4 00:25:43.762 subtype: nvme subsystem 00:25:43.762 treq: not specified, sq flow control disable supported 00:25:43.762 portid: 1 00:25:43.762 trsvcid: 4420 00:25:43.762 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:43.762 traddr: 192.168.100.8 00:25:43.762 eflags: none 00:25:43.762 rdma_prtype: not specified 00:25:43.762 rdma_qptype: connected 00:25:43.762 rdma_cms: rdma-cm 00:25:43.762 rdma_pkey: 0x0000 00:25:43.762 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:25:43.762 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:43.762 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.762 ===================================================== 00:25:43.762 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:43.763 ===================================================== 00:25:43.763 Controller Capabilities/Features 00:25:43.763 ================================ 00:25:43.763 Vendor ID: 0000 00:25:43.763 Subsystem Vendor ID: 0000 00:25:43.763 Serial Number: ca5996768f6f16a8297d 00:25:43.763 Model Number: Linux 00:25:43.763 Firmware Version: 6.7.0-68 00:25:43.763 Recommended Arb Burst: 0 00:25:43.763 IEEE OUI Identifier: 00 00 00 00:25:43.763 Multi-path I/O 00:25:43.763 May have multiple subsystem ports: No 00:25:43.763 May have multiple controllers: No 00:25:43.763 Associated with SR-IOV VF: No 00:25:43.763 Max Data Transfer Size: Unlimited 00:25:43.763 Max Number of Namespaces: 0 00:25:43.763 Max Number of I/O Queues: 1024 00:25:43.763 NVMe Specification Version (VS): 1.3 00:25:43.763 NVMe Specification Version (Identify): 1.3 00:25:43.763 Maximum Queue Entries: 128 00:25:43.763 Contiguous Queues Required: No 00:25:43.763 Arbitration Mechanisms Supported 00:25:43.763 Weighted Round Robin: Not Supported 00:25:43.763 Vendor Specific: Not Supported 00:25:43.763 Reset Timeout: 7500 ms 00:25:43.763 Doorbell Stride: 4 bytes 00:25:43.763 NVM Subsystem Reset: Not Supported 00:25:43.763 Command Sets Supported 00:25:43.763 NVM Command Set: Supported 00:25:43.763 Boot Partition: Not Supported 00:25:43.763 Memory Page Size Minimum: 4096 bytes 00:25:43.763 Memory Page Size Maximum: 4096 bytes 00:25:43.763 Persistent Memory Region: Not Supported 00:25:43.763 Optional Asynchronous Events Supported 00:25:43.763 Namespace Attribute Notices: Not Supported 00:25:43.763 Firmware Activation Notices: Not Supported 00:25:43.763 ANA Change Notices: Not Supported 00:25:43.763 PLE Aggregate Log Change Notices: Not Supported 00:25:43.763 LBA Status Info Alert Notices: Not Supported 00:25:43.763 EGE Aggregate Log Change Notices: Not Supported 00:25:43.763 Normal NVM Subsystem Shutdown event: Not Supported 00:25:43.763 Zone Descriptor Change Notices: Not Supported 00:25:43.763 Discovery Log Change Notices: Supported 00:25:43.763 Controller Attributes 00:25:43.763 128-bit Host Identifier: Not Supported 00:25:43.763 Non-Operational Permissive Mode: Not Supported 00:25:43.763 NVM Sets: Not Supported 00:25:43.763 Read Recovery Levels: Not Supported 00:25:43.763 Endurance Groups: Not Supported 00:25:43.763 Predictable Latency Mode: Not Supported 00:25:43.763 Traffic Based 
Keep ALive: Not Supported 00:25:43.763 Namespace Granularity: Not Supported 00:25:43.763 SQ Associations: Not Supported 00:25:43.763 UUID List: Not Supported 00:25:43.763 Multi-Domain Subsystem: Not Supported 00:25:43.763 Fixed Capacity Management: Not Supported 00:25:43.763 Variable Capacity Management: Not Supported 00:25:43.763 Delete Endurance Group: Not Supported 00:25:43.763 Delete NVM Set: Not Supported 00:25:43.763 Extended LBA Formats Supported: Not Supported 00:25:43.763 Flexible Data Placement Supported: Not Supported 00:25:43.763 00:25:43.763 Controller Memory Buffer Support 00:25:43.763 ================================ 00:25:43.763 Supported: No 00:25:43.763 00:25:43.763 Persistent Memory Region Support 00:25:43.763 ================================ 00:25:43.763 Supported: No 00:25:43.763 00:25:43.763 Admin Command Set Attributes 00:25:43.763 ============================ 00:25:43.763 Security Send/Receive: Not Supported 00:25:43.763 Format NVM: Not Supported 00:25:43.763 Firmware Activate/Download: Not Supported 00:25:43.763 Namespace Management: Not Supported 00:25:43.763 Device Self-Test: Not Supported 00:25:43.763 Directives: Not Supported 00:25:43.763 NVMe-MI: Not Supported 00:25:43.763 Virtualization Management: Not Supported 00:25:43.763 Doorbell Buffer Config: Not Supported 00:25:43.763 Get LBA Status Capability: Not Supported 00:25:43.763 Command & Feature Lockdown Capability: Not Supported 00:25:43.763 Abort Command Limit: 1 00:25:43.763 Async Event Request Limit: 1 00:25:43.763 Number of Firmware Slots: N/A 00:25:43.763 Firmware Slot 1 Read-Only: N/A 00:25:43.763 Firmware Activation Without Reset: N/A 00:25:43.763 Multiple Update Detection Support: N/A 00:25:43.763 Firmware Update Granularity: No Information Provided 00:25:43.763 Per-Namespace SMART Log: No 00:25:43.763 Asymmetric Namespace Access Log Page: Not Supported 00:25:43.763 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:43.763 Command Effects Log Page: Not Supported 00:25:43.763 Get Log Page Extended Data: Supported 00:25:43.763 Telemetry Log Pages: Not Supported 00:25:43.763 Persistent Event Log Pages: Not Supported 00:25:43.763 Supported Log Pages Log Page: May Support 00:25:43.763 Commands Supported & Effects Log Page: Not Supported 00:25:43.763 Feature Identifiers & Effects Log Page:May Support 00:25:43.763 NVMe-MI Commands & Effects Log Page: May Support 00:25:43.763 Data Area 4 for Telemetry Log: Not Supported 00:25:43.763 Error Log Page Entries Supported: 1 00:25:43.763 Keep Alive: Not Supported 00:25:43.763 00:25:43.763 NVM Command Set Attributes 00:25:43.763 ========================== 00:25:43.763 Submission Queue Entry Size 00:25:43.763 Max: 1 00:25:43.763 Min: 1 00:25:43.763 Completion Queue Entry Size 00:25:43.763 Max: 1 00:25:43.763 Min: 1 00:25:43.763 Number of Namespaces: 0 00:25:43.763 Compare Command: Not Supported 00:25:43.763 Write Uncorrectable Command: Not Supported 00:25:43.763 Dataset Management Command: Not Supported 00:25:43.763 Write Zeroes Command: Not Supported 00:25:43.763 Set Features Save Field: Not Supported 00:25:43.763 Reservations: Not Supported 00:25:43.763 Timestamp: Not Supported 00:25:43.763 Copy: Not Supported 00:25:43.763 Volatile Write Cache: Not Present 00:25:43.763 Atomic Write Unit (Normal): 1 00:25:43.763 Atomic Write Unit (PFail): 1 00:25:43.763 Atomic Compare & Write Unit: 1 00:25:43.763 Fused Compare & Write: Not Supported 00:25:43.763 Scatter-Gather List 00:25:43.763 SGL Command Set: Supported 00:25:43.763 SGL Keyed: Supported 00:25:43.763 SGL Bit 
Bucket Descriptor: Not Supported 00:25:43.763 SGL Metadata Pointer: Not Supported 00:25:43.763 Oversized SGL: Not Supported 00:25:43.763 SGL Metadata Address: Not Supported 00:25:43.763 SGL Offset: Supported 00:25:43.763 Transport SGL Data Block: Not Supported 00:25:43.763 Replay Protected Memory Block: Not Supported 00:25:43.763 00:25:43.763 Firmware Slot Information 00:25:43.763 ========================= 00:25:43.763 Active slot: 0 00:25:43.763 00:25:43.763 00:25:43.763 Error Log 00:25:43.763 ========= 00:25:43.763 00:25:43.763 Active Namespaces 00:25:43.763 ================= 00:25:43.763 Discovery Log Page 00:25:43.763 ================== 00:25:43.763 Generation Counter: 2 00:25:43.763 Number of Records: 2 00:25:43.763 Record Format: 0 00:25:43.763 00:25:43.763 Discovery Log Entry 0 00:25:43.763 ---------------------- 00:25:43.763 Transport Type: 1 (RDMA) 00:25:43.763 Address Family: 1 (IPv4) 00:25:43.763 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:43.763 Entry Flags: 00:25:43.763 Duplicate Returned Information: 0 00:25:43.763 Explicit Persistent Connection Support for Discovery: 0 00:25:43.763 Transport Requirements: 00:25:43.763 Secure Channel: Not Specified 00:25:43.763 Port ID: 1 (0x0001) 00:25:43.763 Controller ID: 65535 (0xffff) 00:25:43.763 Admin Max SQ Size: 32 00:25:43.763 Transport Service Identifier: 4420 00:25:43.763 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:43.763 Transport Address: 192.168.100.8 00:25:43.763 Transport Specific Address Subtype - RDMA 00:25:43.763 RDMA QP Service Type: 1 (Reliable Connected) 00:25:43.763 RDMA Provider Type: 1 (No provider specified) 00:25:43.763 RDMA CM Service: 1 (RDMA_CM) 00:25:43.763 Discovery Log Entry 1 00:25:43.763 ---------------------- 00:25:43.763 Transport Type: 1 (RDMA) 00:25:43.763 Address Family: 1 (IPv4) 00:25:43.763 Subsystem Type: 2 (NVM Subsystem) 00:25:43.763 Entry Flags: 00:25:43.763 Duplicate Returned Information: 0 00:25:43.763 Explicit Persistent Connection Support for Discovery: 0 00:25:43.763 Transport Requirements: 00:25:43.763 Secure Channel: Not Specified 00:25:43.763 Port ID: 1 (0x0001) 00:25:43.763 Controller ID: 65535 (0xffff) 00:25:43.763 Admin Max SQ Size: 32 00:25:43.763 Transport Service Identifier: 4420 00:25:43.763 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:43.763 Transport Address: 192.168.100.8 00:25:43.763 Transport Specific Address Subtype - RDMA 00:25:43.763 RDMA QP Service Type: 1 (Reliable Connected) 00:25:43.763 RDMA Provider Type: 1 (No provider specified) 00:25:43.763 RDMA CM Service: 1 (RDMA_CM) 00:25:43.763 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:43.763 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.022 get_feature(0x01) failed 00:25:44.022 get_feature(0x02) failed 00:25:44.022 get_feature(0x04) failed 00:25:44.022 ===================================================== 00:25:44.022 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:25:44.022 ===================================================== 00:25:44.022 Controller Capabilities/Features 00:25:44.022 ================================ 00:25:44.022 Vendor ID: 0000 00:25:44.022 Subsystem Vendor ID: 0000 00:25:44.022 Serial Number: b0dcc1be07ee5b319af2 00:25:44.022 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 
00:25:44.022 Firmware Version: 6.7.0-68 00:25:44.022 Recommended Arb Burst: 6 00:25:44.022 IEEE OUI Identifier: 00 00 00 00:25:44.022 Multi-path I/O 00:25:44.022 May have multiple subsystem ports: Yes 00:25:44.022 May have multiple controllers: Yes 00:25:44.022 Associated with SR-IOV VF: No 00:25:44.022 Max Data Transfer Size: 1048576 00:25:44.022 Max Number of Namespaces: 1024 00:25:44.022 Max Number of I/O Queues: 128 00:25:44.022 NVMe Specification Version (VS): 1.3 00:25:44.022 NVMe Specification Version (Identify): 1.3 00:25:44.022 Maximum Queue Entries: 128 00:25:44.022 Contiguous Queues Required: No 00:25:44.022 Arbitration Mechanisms Supported 00:25:44.022 Weighted Round Robin: Not Supported 00:25:44.022 Vendor Specific: Not Supported 00:25:44.022 Reset Timeout: 7500 ms 00:25:44.022 Doorbell Stride: 4 bytes 00:25:44.022 NVM Subsystem Reset: Not Supported 00:25:44.022 Command Sets Supported 00:25:44.022 NVM Command Set: Supported 00:25:44.022 Boot Partition: Not Supported 00:25:44.022 Memory Page Size Minimum: 4096 bytes 00:25:44.022 Memory Page Size Maximum: 4096 bytes 00:25:44.022 Persistent Memory Region: Not Supported 00:25:44.022 Optional Asynchronous Events Supported 00:25:44.022 Namespace Attribute Notices: Supported 00:25:44.022 Firmware Activation Notices: Not Supported 00:25:44.022 ANA Change Notices: Supported 00:25:44.022 PLE Aggregate Log Change Notices: Not Supported 00:25:44.022 LBA Status Info Alert Notices: Not Supported 00:25:44.022 EGE Aggregate Log Change Notices: Not Supported 00:25:44.022 Normal NVM Subsystem Shutdown event: Not Supported 00:25:44.022 Zone Descriptor Change Notices: Not Supported 00:25:44.022 Discovery Log Change Notices: Not Supported 00:25:44.022 Controller Attributes 00:25:44.022 128-bit Host Identifier: Supported 00:25:44.022 Non-Operational Permissive Mode: Not Supported 00:25:44.022 NVM Sets: Not Supported 00:25:44.022 Read Recovery Levels: Not Supported 00:25:44.022 Endurance Groups: Not Supported 00:25:44.022 Predictable Latency Mode: Not Supported 00:25:44.022 Traffic Based Keep ALive: Supported 00:25:44.022 Namespace Granularity: Not Supported 00:25:44.022 SQ Associations: Not Supported 00:25:44.022 UUID List: Not Supported 00:25:44.022 Multi-Domain Subsystem: Not Supported 00:25:44.022 Fixed Capacity Management: Not Supported 00:25:44.022 Variable Capacity Management: Not Supported 00:25:44.022 Delete Endurance Group: Not Supported 00:25:44.022 Delete NVM Set: Not Supported 00:25:44.022 Extended LBA Formats Supported: Not Supported 00:25:44.022 Flexible Data Placement Supported: Not Supported 00:25:44.022 00:25:44.022 Controller Memory Buffer Support 00:25:44.022 ================================ 00:25:44.022 Supported: No 00:25:44.022 00:25:44.022 Persistent Memory Region Support 00:25:44.022 ================================ 00:25:44.022 Supported: No 00:25:44.022 00:25:44.022 Admin Command Set Attributes 00:25:44.022 ============================ 00:25:44.022 Security Send/Receive: Not Supported 00:25:44.022 Format NVM: Not Supported 00:25:44.022 Firmware Activate/Download: Not Supported 00:25:44.022 Namespace Management: Not Supported 00:25:44.022 Device Self-Test: Not Supported 00:25:44.022 Directives: Not Supported 00:25:44.022 NVMe-MI: Not Supported 00:25:44.022 Virtualization Management: Not Supported 00:25:44.022 Doorbell Buffer Config: Not Supported 00:25:44.022 Get LBA Status Capability: Not Supported 00:25:44.022 Command & Feature Lockdown Capability: Not Supported 00:25:44.022 Abort Command Limit: 4 00:25:44.022 Async Event 
Request Limit: 4 00:25:44.022 Number of Firmware Slots: N/A 00:25:44.022 Firmware Slot 1 Read-Only: N/A 00:25:44.022 Firmware Activation Without Reset: N/A 00:25:44.022 Multiple Update Detection Support: N/A 00:25:44.022 Firmware Update Granularity: No Information Provided 00:25:44.022 Per-Namespace SMART Log: Yes 00:25:44.022 Asymmetric Namespace Access Log Page: Supported 00:25:44.022 ANA Transition Time : 10 sec 00:25:44.022 00:25:44.022 Asymmetric Namespace Access Capabilities 00:25:44.022 ANA Optimized State : Supported 00:25:44.022 ANA Non-Optimized State : Supported 00:25:44.022 ANA Inaccessible State : Supported 00:25:44.022 ANA Persistent Loss State : Supported 00:25:44.022 ANA Change State : Supported 00:25:44.022 ANAGRPID is not changed : No 00:25:44.022 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:44.022 00:25:44.022 ANA Group Identifier Maximum : 128 00:25:44.022 Number of ANA Group Identifiers : 128 00:25:44.022 Max Number of Allowed Namespaces : 1024 00:25:44.022 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:44.022 Command Effects Log Page: Supported 00:25:44.022 Get Log Page Extended Data: Supported 00:25:44.022 Telemetry Log Pages: Not Supported 00:25:44.022 Persistent Event Log Pages: Not Supported 00:25:44.022 Supported Log Pages Log Page: May Support 00:25:44.022 Commands Supported & Effects Log Page: Not Supported 00:25:44.022 Feature Identifiers & Effects Log Page:May Support 00:25:44.022 NVMe-MI Commands & Effects Log Page: May Support 00:25:44.022 Data Area 4 for Telemetry Log: Not Supported 00:25:44.022 Error Log Page Entries Supported: 128 00:25:44.022 Keep Alive: Supported 00:25:44.022 Keep Alive Granularity: 1000 ms 00:25:44.022 00:25:44.022 NVM Command Set Attributes 00:25:44.022 ========================== 00:25:44.022 Submission Queue Entry Size 00:25:44.022 Max: 64 00:25:44.022 Min: 64 00:25:44.022 Completion Queue Entry Size 00:25:44.022 Max: 16 00:25:44.022 Min: 16 00:25:44.022 Number of Namespaces: 1024 00:25:44.022 Compare Command: Not Supported 00:25:44.022 Write Uncorrectable Command: Not Supported 00:25:44.022 Dataset Management Command: Supported 00:25:44.022 Write Zeroes Command: Supported 00:25:44.022 Set Features Save Field: Not Supported 00:25:44.022 Reservations: Not Supported 00:25:44.022 Timestamp: Not Supported 00:25:44.022 Copy: Not Supported 00:25:44.022 Volatile Write Cache: Present 00:25:44.022 Atomic Write Unit (Normal): 1 00:25:44.022 Atomic Write Unit (PFail): 1 00:25:44.022 Atomic Compare & Write Unit: 1 00:25:44.022 Fused Compare & Write: Not Supported 00:25:44.022 Scatter-Gather List 00:25:44.022 SGL Command Set: Supported 00:25:44.022 SGL Keyed: Supported 00:25:44.022 SGL Bit Bucket Descriptor: Not Supported 00:25:44.023 SGL Metadata Pointer: Not Supported 00:25:44.023 Oversized SGL: Not Supported 00:25:44.023 SGL Metadata Address: Not Supported 00:25:44.023 SGL Offset: Supported 00:25:44.023 Transport SGL Data Block: Not Supported 00:25:44.023 Replay Protected Memory Block: Not Supported 00:25:44.023 00:25:44.023 Firmware Slot Information 00:25:44.023 ========================= 00:25:44.023 Active slot: 0 00:25:44.023 00:25:44.023 Asymmetric Namespace Access 00:25:44.023 =========================== 00:25:44.023 Change Count : 0 00:25:44.023 Number of ANA Group Descriptors : 1 00:25:44.023 ANA Group Descriptor : 0 00:25:44.023 ANA Group ID : 1 00:25:44.023 Number of NSID Values : 1 00:25:44.023 Change Count : 0 00:25:44.023 ANA State : 1 00:25:44.023 Namespace Identifier : 1 00:25:44.023 00:25:44.023 Commands Supported 
and Effects 00:25:44.023 ============================== 00:25:44.023 Admin Commands 00:25:44.023 -------------- 00:25:44.023 Get Log Page (02h): Supported 00:25:44.023 Identify (06h): Supported 00:25:44.023 Abort (08h): Supported 00:25:44.023 Set Features (09h): Supported 00:25:44.023 Get Features (0Ah): Supported 00:25:44.023 Asynchronous Event Request (0Ch): Supported 00:25:44.023 Keep Alive (18h): Supported 00:25:44.023 I/O Commands 00:25:44.023 ------------ 00:25:44.023 Flush (00h): Supported 00:25:44.023 Write (01h): Supported LBA-Change 00:25:44.023 Read (02h): Supported 00:25:44.023 Write Zeroes (08h): Supported LBA-Change 00:25:44.023 Dataset Management (09h): Supported 00:25:44.023 00:25:44.023 Error Log 00:25:44.023 ========= 00:25:44.023 Entry: 0 00:25:44.023 Error Count: 0x3 00:25:44.023 Submission Queue Id: 0x0 00:25:44.023 Command Id: 0x5 00:25:44.023 Phase Bit: 0 00:25:44.023 Status Code: 0x2 00:25:44.023 Status Code Type: 0x0 00:25:44.023 Do Not Retry: 1 00:25:44.023 Error Location: 0x28 00:25:44.023 LBA: 0x0 00:25:44.023 Namespace: 0x0 00:25:44.023 Vendor Log Page: 0x0 00:25:44.023 ----------- 00:25:44.023 Entry: 1 00:25:44.023 Error Count: 0x2 00:25:44.023 Submission Queue Id: 0x0 00:25:44.023 Command Id: 0x5 00:25:44.023 Phase Bit: 0 00:25:44.023 Status Code: 0x2 00:25:44.023 Status Code Type: 0x0 00:25:44.023 Do Not Retry: 1 00:25:44.023 Error Location: 0x28 00:25:44.023 LBA: 0x0 00:25:44.023 Namespace: 0x0 00:25:44.023 Vendor Log Page: 0x0 00:25:44.023 ----------- 00:25:44.023 Entry: 2 00:25:44.023 Error Count: 0x1 00:25:44.023 Submission Queue Id: 0x0 00:25:44.023 Command Id: 0x0 00:25:44.023 Phase Bit: 0 00:25:44.023 Status Code: 0x2 00:25:44.023 Status Code Type: 0x0 00:25:44.023 Do Not Retry: 1 00:25:44.023 Error Location: 0x28 00:25:44.023 LBA: 0x0 00:25:44.023 Namespace: 0x0 00:25:44.023 Vendor Log Page: 0x0 00:25:44.023 00:25:44.023 Number of Queues 00:25:44.023 ================ 00:25:44.023 Number of I/O Submission Queues: 128 00:25:44.023 Number of I/O Completion Queues: 128 00:25:44.023 00:25:44.023 ZNS Specific Controller Data 00:25:44.023 ============================ 00:25:44.023 Zone Append Size Limit: 0 00:25:44.023 00:25:44.023 00:25:44.023 Active Namespaces 00:25:44.023 ================= 00:25:44.023 get_feature(0x05) failed 00:25:44.023 Namespace ID:1 00:25:44.023 Command Set Identifier: NVM (00h) 00:25:44.023 Deallocate: Supported 00:25:44.023 Deallocated/Unwritten Error: Not Supported 00:25:44.023 Deallocated Read Value: Unknown 00:25:44.023 Deallocate in Write Zeroes: Not Supported 00:25:44.023 Deallocated Guard Field: 0xFFFF 00:25:44.023 Flush: Supported 00:25:44.023 Reservation: Not Supported 00:25:44.023 Namespace Sharing Capabilities: Multiple Controllers 00:25:44.023 Size (in LBAs): 3125627568 (1490GiB) 00:25:44.023 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:44.023 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:44.023 UUID: 0a0cdee0-481e-4ddd-905e-c8e44ad46286 00:25:44.023 Thin Provisioning: Not Supported 00:25:44.023 Per-NS Atomic Units: Yes 00:25:44.023 Atomic Boundary Size (Normal): 0 00:25:44.023 Atomic Boundary Size (PFail): 0 00:25:44.023 Atomic Boundary Offset: 0 00:25:44.023 NGUID/EUI64 Never Reused: No 00:25:44.023 ANA group ID: 1 00:25:44.023 Namespace Write Protected: No 00:25:44.023 Number of LBA Formats: 1 00:25:44.023 Current LBA Format: LBA Format #00 00:25:44.023 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:44.023 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # 
nvmftestfini 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:44.023 rmmod nvme_rdma 00:25:44.023 rmmod nvme_fabrics 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:25:44.023 20:34:56 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:47.301 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:80:04.3 (8086 2021): ioatdma -> 
vfio-pci 00:25:47.301 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.301 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:48.736 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:25:48.736 00:25:48.736 real 0m15.834s 00:25:48.736 user 0m4.493s 00:25:48.736 sys 0m9.153s 00:25:48.736 20:35:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:48.736 20:35:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.736 ************************************ 00:25:48.736 END TEST nvmf_identify_kernel_target 00:25:48.736 ************************************ 00:25:48.736 20:35:01 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:48.736 20:35:01 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:48.736 20:35:01 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:48.736 20:35:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:48.736 ************************************ 00:25:48.736 START TEST nvmf_auth_host 00:25:48.736 ************************************ 00:25:48.736 20:35:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:48.736 * Looking for test storage... 00:25:48.994 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.994 20:35:01 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:48.995 20:35:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 
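The nvmf_identify_kernel_target run above stands up the in-kernel NVMe-oF target entirely through the nvmet configfs tree: it creates the subsystem, namespace and port directories, writes the backing block device, transport address, transport type, service ID and address family, then links the subsystem into the port before probing it with nvme discover and spdk_nvme_identify. A minimal standalone sketch of that sequence is below. Bash xtrace does not print redirections, so the configfs attribute file names (attr_allow_any_host, device_path, enable, addr_*) are not visible in the log; they are filled in here from the standard kernel nvmet layout and, like the nvmet-rdma module load, should be read as assumptions rather than a verbatim replay of nvmf/common.sh.

  # sketch: export /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn over RDMA (assumed nvmet configfs attribute names)
  modprobe nvmet nvmet-rdma          # the trace only shows "modprobe nvmet"; nvmet-rdma is assumed for the RDMA port
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  mkdir ports/1
  echo 192.168.100.8 > ports/1/addr_traddr
  echo rdma > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
  # the target should now answer: nvme discover -t rdma -a 192.168.100.8 -s 4420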
00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:25:55.548 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:25:55.548 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:25:55.548 Found net devices under 0000:da:00.0: mlx_0_0 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:25:55.548 Found net devices under 0000:da:00.1: mlx_0_1 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # 
modprobe rdma_ucm 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:55.548 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:55.549 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:55.549 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:25:55.549 altname enp218s0f0np0 00:25:55.549 altname ens818f0np0 00:25:55.549 inet 192.168.100.8/24 scope global mlx_0_0 00:25:55.549 valid_lft forever preferred_lft forever 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:55.549 20:35:07 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:55.549 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:55.549 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:25:55.549 altname enp218s0f1np1 00:25:55.549 altname ens818f1np1 00:25:55.549 inet 192.168.100.9/24 scope global mlx_0_1 00:25:55.549 valid_lft forever preferred_lft forever 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 
addr show mlx_0_0 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:55.549 192.168.100.9' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:55.549 192.168.100.9' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:55.549 192.168.100.9' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3190092 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3190092 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3190092 ']' 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
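The get_ip_address calls traced above reduce each RDMA interface to its IPv4 address, and the two results are folded into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP for the rest of the run. A minimal standalone sketch of that extraction, assuming the renamed Mellanox netdevs mlx_0_0 and mlx_0_1 seen above are present and configured:

    # Sketch only: derive the first/second RDMA target IPs the same way the traced helpers do.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run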
00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:55.549 20:35:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.806 20:35:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:55.806 20:35:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:25:55.806 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.806 20:35:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.806 20:35:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d39406e2d1d721a4a92ba3d8b1b354e 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.YUI 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d39406e2d1d721a4a92ba3d8b1b354e 0 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d39406e2d1d721a4a92ba3d8b1b354e 0 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4d39406e2d1d721a4a92ba3d8b1b354e 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.YUI 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.YUI 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.YUI 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=acb17e3ff788e270ce52cf99486687d66ffca21399ef2b0cb656f923fd0bff24 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eYb 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key acb17e3ff788e270ce52cf99486687d66ffca21399ef2b0cb656f923fd0bff24 3 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 acb17e3ff788e270ce52cf99486687d66ffca21399ef2b0cb656f923fd0bff24 3 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=acb17e3ff788e270ce52cf99486687d66ffca21399ef2b0cb656f923fd0bff24 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eYb 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eYb 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.eYb 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.064 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=50977626994c5991db63eb967d0aba540a897ab0e2c626e1 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MMY 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 50977626994c5991db63eb967d0aba540a897ab0e2c626e1 0 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 50977626994c5991db63eb967d0aba540a897ab0e2c626e1 0 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=50977626994c5991db63eb967d0aba540a897ab0e2c626e1 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.MMY 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MMY 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.MMY 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:56.065 20:35:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d70d9b97ca87533a169c4d430f458c4d832c15713ba04ff 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.s01 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d70d9b97ca87533a169c4d430f458c4d832c15713ba04ff 2 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d70d9b97ca87533a169c4d430f458c4d832c15713ba04ff 2 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d70d9b97ca87533a169c4d430f458c4d832c15713ba04ff 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:56.065 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.322 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.s01 00:25:56.322 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.s01 00:25:56.322 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.s01 00:25:56.322 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=761f1ecb3da99b05c25941d7363126bd 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QOU 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 761f1ecb3da99b05c25941d7363126bd 1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 761f1ecb3da99b05c25941d7363126bd 1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=761f1ecb3da99b05c25941d7363126bd 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QOU 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QOU 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.QOU 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9afdfe2f7a836061ce8a388f1701fe5d 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.twe 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9afdfe2f7a836061ce8a388f1701fe5d 1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9afdfe2f7a836061ce8a388f1701fe5d 1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9afdfe2f7a836061ce8a388f1701fe5d 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.twe 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.twe 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.twe 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:56.323 20:35:09 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cb95d488862c073af60e7371bb25a231232169fbcd436f1a 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Wrq 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cb95d488862c073af60e7371bb25a231232169fbcd436f1a 2 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cb95d488862c073af60e7371bb25a231232169fbcd436f1a 2 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cb95d488862c073af60e7371bb25a231232169fbcd436f1a 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Wrq 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Wrq 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Wrq 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e673011329d88151e437241e47e682f 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pap 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e673011329d88151e437241e47e682f 0 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e673011329d88151e437241e47e682f 0 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e673011329d88151e437241e47e682f 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pap 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pap 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.pap 
00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8450a99459a84145e5a75fb972f1e45c097f6e8fd047b012b787209f88b5d358 00:25:56.323 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1ZP 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8450a99459a84145e5a75fb972f1e45c097f6e8fd047b012b787209f88b5d358 3 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8450a99459a84145e5a75fb972f1e45c097f6e8fd047b012b787209f88b5d358 3 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8450a99459a84145e5a75fb972f1e45c097f6e8fd047b012b787209f88b5d358 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1ZP 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1ZP 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1ZP 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3190092 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3190092 ']' 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
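Each gen_dhchap_key call above draws random bytes with xxd and then wraps them into a DH-HMAC-CHAP secret through an inline step the trace only shows as "python -". Judging from the secrets that appear later (DHHC-1:<digest index>:<base64 blob>:), the blob is the ASCII key plus a 4-byte trailer, consistent with the CRC-32 trailer the NVMe secret representation prescribes; that detail, the helper name and the output path below are assumptions, not read off the trace:

    # Sketch: generate a DHHC-1 secret of the shape registered later in this log.
    # Digest index follows the digests map in the trace: 0=null, 1=sha256, 2=sha384, 3=sha512.
    gen_dhchap_key_sketch() {
        local digest_idx=$1 len=$2 key b64
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex string, $len characters long
        # Assumption: the trailer is a little-endian CRC-32 of the key bytes; the traced
        # script hides this formatting step inside its "python -" heredoc.
        b64=$(python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
print(base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode())' "$key")
        printf 'DHHC-1:%02x:%s:\n' "$digest_idx" "$b64"
    }
    gen_dhchap_key_sketch 1 32 > /tmp/spdk.key-sha256.example   # hypothetical file name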
00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YUI 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.eYb ]] 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eYb 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.MMY 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.s01 ]] 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.s01 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.581 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.QOU 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.twe ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.twe 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Wrq 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pap ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pap 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1ZP 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
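The keyring_file_add_key loop above hands every generated secret file to the running nvmf_tgt under the names key0..key4 and ckey0..ckey3, which is how the later attach calls refer to them. Condensed, the registration amounts to the following; the direct rpc.py call is a stand-in for the rpc_cmd wrapper used throughout this trace:

    # Sketch: register the on-disk DHHC-1 secrets with the target's keyring.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
        if [[ -n ${ckeys[$i]} ]]; then                      # ckeys[4] is empty in this run
            "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done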
kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:56.839 20:35:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:26:00.117 Waiting for block devices as requested 00:26:00.117 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:26:00.117 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:00.117 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:00.117 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:00.117 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:00.117 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:00.117 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:00.374 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:00.374 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:00.374 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:00.374 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:00.631 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:00.631 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:00.631 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:00.887 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:00.887 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:00.887 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:01.451 20:35:14 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:01.709 No valid GPT data, bailing 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:01.709 
20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:01.709 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:26:01.709 00:26:01.709 Discovery Log Number of Records 2, Generation counter 2 00:26:01.709 =====Discovery Log Entry 0====== 00:26:01.709 trtype: rdma 00:26:01.709 adrfam: ipv4 00:26:01.709 subtype: current discovery subsystem 00:26:01.709 treq: not specified, sq flow control disable supported 00:26:01.709 portid: 1 00:26:01.709 trsvcid: 4420 00:26:01.709 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:01.709 traddr: 192.168.100.8 00:26:01.709 eflags: none 00:26:01.709 rdma_prtype: not specified 00:26:01.709 rdma_qptype: connected 00:26:01.709 rdma_cms: rdma-cm 00:26:01.709 rdma_pkey: 0x0000 00:26:01.710 =====Discovery Log Entry 1====== 00:26:01.710 trtype: rdma 00:26:01.710 adrfam: ipv4 00:26:01.710 subtype: nvme subsystem 00:26:01.710 treq: not specified, sq flow control disable supported 00:26:01.710 portid: 1 00:26:01.710 trsvcid: 4420 00:26:01.710 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:01.710 traddr: 192.168.100.8 00:26:01.710 eflags: none 00:26:01.710 rdma_prtype: not specified 00:26:01.710 rdma_qptype: connected 00:26:01.710 rdma_cms: rdma-cm 00:26:01.710 rdma_pkey: 0x0000 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
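configure_kernel_target, traced just above, builds the kernel NVMe-oF side of the test purely through configfs: a subsystem with one namespace backed by the local /dev/nvme0n1 and an RDMA port on 192.168.100.8:4420, which the nvme discover output above confirms (the discovery subsystem plus nqn.2024-02.io.spdk:cnode0). The xtrace lines only show bare echo commands, so the attribute files named below are the standard nvmet configfs ones, inferred rather than read off the log; host/auth.sh then trades allow-any-host for an explicit allowed_hosts link:

    # Sketch of the configfs layout assembled above (attribute-file targets are inferred).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo /dev/nvme0n1    > "$subsys/namespaces/1/device_path"
    echo 1               > "$subsys/namespaces/1/enable"
    echo 192.168.100.8   > "$nvmet/ports/1/addr_traddr"
    echo rdma            > "$nvmet/ports/1/addr_trtype"
    echo 4420            > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4            > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    # host/auth.sh: restrict the subsystem to the test host's NQN instead of allow-any-host.
    mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/attr_allow_any_host"                # inferred target of the bare "echo 0"
    ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"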
'hmac(sha256)' 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.710 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.968 nvme0n1 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:01.968 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
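On the host side, each connect_authenticate pass in this trace is two RPCs against the same nvmf_tgt, followed by a check that a controller named nvme0 appeared and a detach; the key names are the keyring entries registered earlier. A condensed sketch of one pass (sha256/ffdhe2048, key slot 1; the direct rpc.py call again stands in for rpc_cmd):

    # Sketch: one authenticated attach/verify/detach cycle as driven above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0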
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.969 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.226 20:35:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.226 nvme0n1 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.226 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.484 20:35:15 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.484 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.485 nvme0n1 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.485 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.743 20:35:15 
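From here the run keeps repeating the same set-key / connect / detach cycle across the matrix built by the outer loops visible in the trace (host/auth.sh@100..103): every digest, every FFDHE group, and every key slot, with the matching controller key when one was generated. Structurally it is nothing more than the nested loop below; function names come from the traced script, bodies are elided:

    # Sketch: the iteration order behind the repeated blocks in this log.
    for digest in "${digests[@]}"; do              # sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
            for keyid in "${!keys[@]}"; do         # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # reprogram the kernel target
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach/verify/detach via RPC
            done
        done
    done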
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.743 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.001 nvme0n1 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:03.001 20:35:15 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.001 20:35:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.259 nvme0n1 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.259 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.517 nvme0n1 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.517 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.775 nvme0n1 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
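
The DHHC-1:... strings echoed back and forth are the NVMe DH-HMAC-CHAP secret representation: three colon-separated fields, the DHHC-1 tag, a two-digit id for the hash used to transform the secret (00 for an untransformed secret, 01-03 for SHA-256/384/512 as I read the format), and a base64 payload carrying the secret material plus its CRC. Below is a small, purely illustrative helper for pulling one of the keys from this trace apart; the function name and the CRC remark are additions for illustration, not part of auth.sh.

  # Hypothetical helper: split a DHHC-1 secret into its fields and report how many
  # bytes of (secret + CRC) the base64 payload decodes to.
  inspect_dhchap_key() {
      local tag hash_id b64
      IFS=: read -r tag hash_id b64 _ <<< "$1"
      printf '%s hash_id=%s payload=%d bytes\n' "$tag" "$hash_id" \
          "$(printf '%s' "$b64" | base64 -d | wc -c)"
  }
  inspect_dhchap_key 'DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==:'
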
00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.775 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.032 nvme0n1 00:26:04.032 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.032 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.032 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.032 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.032 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.032 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.032 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.033 20:35:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.033 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.290 nvme0n1 00:26:04.290 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.290 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.290 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.290 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.290 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.290 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.547 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.548 
20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.548 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.805 nvme0n1 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.805 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.806 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.806 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.806 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.806 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.806 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.063 nvme0n1 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
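
The get_main_ns_ip stanza that precedes every attach is the trace of a small helper in nvmf/common.sh choosing which address to dial for the transport under test: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which on this rig always resolves to 192.168.100.8. A rough reconstruction from the trace follows; the TEST_TRANSPORT variable name is a guess, since only the candidate table and the final indirection are visible above.

  # Rough reconstruction of get_main_ns_ip (nvmf/common.sh@741-755), sketch only.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      # Bail out if the transport or its candidate variable name is unknown/empty.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"   # 192.168.100.8 throughout this run
  }
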
xtrace_disable 00:26:05.063 20:35:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.320 nvme0n1 00:26:05.320 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.320 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.320 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.320 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.320 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.320 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.577 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.578 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.835 nvme0n1 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.835 20:35:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.092 nvme0n1 00:26:06.092 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.092 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:06.092 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.092 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.092 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.092 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.350 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.633 nvme0n1 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.633 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.890 nvme0n1 00:26:06.890 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.890 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.890 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.890 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.890 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.890 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.148 
20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.148 20:35:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.406 nvme0n1 00:26:07.406 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.406 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.406 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:07.664 20:35:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:07.665 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.665 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.665 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.230 nvme0n1 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
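Note for readers following the trace: the repeating blocks above are produced by the nested loop visible at host/auth.sh@100-@104, which walks every digest/dhgroup/keyid combination, programs the key on the target side (nvmet_auth_set_key) and then re-connects the initiator with matching DH-HMAC-CHAP settings (connect_authenticate). The sketch below shows only that driver loop, reconstructed from the trace markers; the array contents are illustrative assumptions, and the keys[]/ckeys[] arrays are assumed to have been populated earlier in the script (not shown here).

# Sketch (assumption): driver loop inferred from host/auth.sh@100-@104 in the trace above.
# digests/dhgroups values are illustrative; keys[]/ckeys[] are assumed to be filled in earlier.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # program the key/ckey pair into the kernel nvmet target for this combination
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # restrict the initiator to the same digest/dhgroup and re-attach with the matching key
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done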
00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.230 20:35:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:08.230 
20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.230 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.488 nvme0n1 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.488 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:08.745 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.746 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.003 nvme0n1 00:26:09.003 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.003 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.003 20:35:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.003 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.003 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.003 20:35:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:09.261 20:35:22 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.261 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:09.518 nvme0n1 00:26:09.518 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.519 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.776 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.777 20:35:22 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.777 20:35:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.342 nvme0n1 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.342 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.907 nvme0n1 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.907 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.165 20:35:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.730 nvme0n1 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.730 20:35:24 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.730 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.731 20:35:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.298 nvme0n1 00:26:12.298 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.298 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.298 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.298 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.298 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:26:12.298 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.555 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.120 nvme0n1 00:26:13.120 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.120 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.120 20:35:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.120 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.120 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.120 20:35:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:13.121 
20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.121 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.379 nvme0n1 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.379 
20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.379 
20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.379 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.636 nvme0n1 00:26:13.636 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.636 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.636 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.636 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.636 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.636 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.636 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.637 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.894 nvme0n1 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.894 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.152 20:35:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.152 nvme0n1 00:26:14.152 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.152 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.152 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.152 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.153 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.153 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.419 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.420 20:35:27 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.420 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.731 nvme0n1 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.731 20:35:27 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.731 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.012 nvme0n1 
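(Editorial sketch, reconstructed from the xtrace above.) For each key slot the test repeats the same cycle: restrict the host to one digest/DH group with bdev_nvme_set_options, attach over RDMA with bdev_nvme_attach_controller using that slot's DH-HMAC-CHAP key (and a controller key only when one is defined), confirm the controller with bdev_nvme_get_controllers, then detach before the next slot. The sketch below is illustrative only, not the literal host/auth.sh: rpc_cmd, the 192.168.100.8 target, the port/NQNs, and the ${ckeys[keyid]:+...} expansion are copied from the trace, while the standalone loop and the assumed keys/ckeys arrays are stand-ins supplied by the caller.

    # Assumes: rpc_cmd wrapper from the test harness, plus keys[] / ckeys[]
    # arrays holding the DHHC-1 secrets shown in the log (illustrative).
    digest=sha384
    dhgroup=ffdhe2048   # the log then re-runs the same loop for ffdhe3072 and ffdhe4096

    for keyid in "${!keys[@]}"; do
        # Limit the host to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Pass a controller (bidirectional) key only for slots that define one,
        # mirroring the ${ckeys[keyid]:+...} expansion seen in the trace.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the authenticated controller came up, then tear it down
        # so the next slot starts clean.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done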
00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.012 20:35:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.268 nvme0n1 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.268 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.269 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.526 nvme0n1 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.526 20:35:28 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.526 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.527 
20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.527 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.785 nvme0n1 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:15.785 20:35:28 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.785 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.043 20:35:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.043 nvme0n1 00:26:16.043 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.043 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.043 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.043 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.043 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.301 20:35:29 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.301 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.559 nvme0n1 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.559 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.126 nvme0n1 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.126 20:35:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.385 nvme0n1 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.385 
20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.385 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.643 nvme0n1 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.643 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.900 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.900 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.900 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:17.900 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.900 20:35:30 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:17.900 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.901 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.159 nvme0n1 00:26:18.159 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
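For reference, every pass traced above runs the same host-side RPC sequence from host/auth.sh: bdev_nvme_set_options restricts the negotiated DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller connects over RDMA with that pass's credentials, bdev_nvme_get_controllers confirms the controller (nvme0) came up, and bdev_nvme_detach_controller tears it down before the next key is tried. The rpc_cmd calls in the trace are the autotest wrapper around SPDK's RPC client; a minimal sketch of one sha384/ffdhe4096 pass issued directly, assuming scripts/rpc.py is run from an SPDK checkout, a target is already listening on 192.168.100.8:4420, and the key names key1/ckey1 were registered earlier in the script (not shown in this excerpt), would be:

  # restrict the host to a single digest/dhgroup combination (arguments as logged above)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # connect to the target subsystem with DH-HMAC-CHAP credentials key1/ckey1
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the controller attached, then drop it before the next key is tried
  ./scripts/rpc.py bdev_nvme_get_controllers
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

Passes with keyid 4 omit --dhchap-ctrlr-key, matching the empty ckey visible in the trace for that key.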
00:26:18.159 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.159 20:35:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.159 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.159 20:35:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:18.159 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:18.160 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.160 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.160 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.725 nvme0n1 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.725 20:35:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 nvme0n1 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.292 20:35:32 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.292 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.550 nvme0n1 00:26:19.550 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.550 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.550 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.550 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.550 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.808 20:35:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.065 nvme0n1 00:26:20.065 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.065 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.065 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.065 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.065 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.065 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.323 20:35:33 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.323 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.580 nvme0n1 00:26:20.580 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.580 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.580 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.580 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.580 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.580 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.838 20:35:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.403 nvme0n1 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.403 20:35:34 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.403 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.335 nvme0n1 00:26:22.335 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.335 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.335 20:35:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.335 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.335 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.335 20:35:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.335 
20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.335 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.900 nvme0n1 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.900 
20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.900 20:35:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.464 nvme0n1 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.464 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.721 20:35:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.286 nvme0n1 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 
00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.286 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.544 nvme0n1 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.544 20:35:37 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.544 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.802 nvme0n1 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
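The get_main_ns_ip expansion traced repeatedly above (nvmf/common.sh@741 through @755) is the helper that picks which environment variable carries the target address for the transport under test. A minimal reconstruction inferred from that expanded trace follows; the TEST_TRANSPORT guard is an assumption (the trace only shows the literal value rdma), while the candidate map and the indirect expansion to 192.168.100.8 mirror exactly what the trace prints.

# Reconstructed from the nvmf/common.sh xtrace above; the transport variable
# name is an assumption, the selection logic mirrors the traced expansions.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs resolve to the target-side IP
        [tcp]=NVMF_INITIATOR_IP       # TCP runs resolve to the initiator IP
    )

    # Give up if the transport is unknown or has no candidate variable.
    [[ -z ${TEST_TRANSPORT:-} ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1

    # Indirect expansion turns the variable *name* into its value,
    # e.g. NVMF_FIRST_TARGET_IP -> 192.168.100.8 in this log.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip:-} ]] && return 1
    echo "${!ip}"
}

# Example matching this run:
#   TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8 get_main_ns_ip
#   -> 192.168.100.8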
00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.802 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.060 nvme0n1 00:26:25.060 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.060 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.060 20:35:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.060 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.060 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:26:25.060 20:35:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.060 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.318 nvme0n1 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.318 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.576 nvme0n1 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.576 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.834 20:35:38 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.834 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.092 nvme0n1 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.092 20:35:38 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.092 20:35:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.350 nvme0n1 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.350 20:35:39 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.350 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.608 nvme0n1 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.608 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.865 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.866 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.866 nvme0n1 00:26:26.866 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.866 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.866 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.866 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.866 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.866 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.123 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.124 20:35:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.381 nvme0n1 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:27.381 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.382 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.639 nvme0n1 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.639 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.640 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.205 nvme0n1 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.205 
20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.205 20:35:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.463 nvme0n1 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.463 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 
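Each bare nvme0n1 marker in this log is one pass of the same host-side cycle: pick the next digest/dhgroup/keyid combination, push the key (and the controller key, when one exists) to the target via nvmet_auth_set_key, then configure and authenticate the host over RDMA. A hedged, condensed sketch of that cycle as it appears in the trace is below; the RPC names, transport arguments, NQNs and variable names are copied from the visible auth.sh lines, while the function body itself is a reconstruction rather than the actual script, and the dhgroups/keys/ckeys arrays are assumed to be populated earlier in the test.

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # --dhchap-ctrlr-key is only appended when a controller key exists for this keyid
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # the nvme0n1 lines in the log are the attached namespace surfacing on the host
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    # Per the trace, the cycle is driven by nested loops over the DH groups
    # (ffdhe4096, ffdhe6144, ffdhe8192 in this section) and key ids 0-4:
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done

The success criterion visible in the log is simply that the controller comes up named nvme0 and can be detached cleanly for every digest/dhgroup/key combination.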
00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.464 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.721 nvme0n1 00:26:28.721 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.722 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.722 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.722 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.722 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.722 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.979 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.979 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.980 20:35:41 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.980 20:35:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.238 nvme0n1 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.238 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.804 nvme0n1 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.804 20:35:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.370 nvme0n1 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.370 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.627 nvme0n1 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.628 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.885 20:35:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.142 nvme0n1 00:26:31.142 20:35:44 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.142 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.399 20:35:44 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.399 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.657 nvme0n1 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.657 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQzOTQwNmUyZDFkNzIxYTRhOTJiYTNkOGIxYjM1NGWd8XXT: 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: ]] 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWNiMTdlM2ZmNzg4ZTI3MGNlNTJjZjk5NDg2Njg3ZDY2ZmZjYTIxMzk5ZWYyYjBjYjY1NmY5MjNmZDBiZmYyNI/Fg9o=: 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.914 20:35:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.479 nvme0n1 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.479 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.044 nvme0n1 00:26:33.044 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.044 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.044 20:35:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.044 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.044 20:35:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.044 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.044 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.044 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.044 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.044 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYxZjFlY2IzZGE5OWIwNWMyNTk0MWQ3MzYzMTI2YmQRBV+K: 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: ]] 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWFmZGZlMmY3YTgzNjA2MWNlOGEzODhmMTcwMWZlNWTDh5qi: 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.302 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.303 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:33.303 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:33.303 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:33.303 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:33.303 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:33.303 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.303 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.303 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.868 nvme0n1 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5NWQ0ODg4NjJjMDczYWY2MGU3MzcxYmIyNWEyMzEyMzIxNjlmYmNkNDM2ZjFhnNIknw==: 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: ]] 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU2NzMwMTEzMjlkODgxNTFlNDM3MjQxZTQ3ZTY4MmZGEVRs: 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:33.868 20:35:46 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.868 20:35:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.800 nvme0n1 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.800 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODQ1MGE5OTQ1OWE4NDE0NWU1YTc1ZmI5NzJmMWU0NWMwOTdmNmU4ZmQwNDdiMDEyYjc4NzIwOWY4OGI1ZDM1OD/T/uM=: 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.801 20:35:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.367 nvme0n1 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5Nzc2MjY5OTRjNTk5MWRiNjNlYjk2N2QwYWJhNTQwYTg5N2FiMGUyYzYyNmUxSCxk9Q==: 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQ3MGQ5Yjk3Y2E4NzUzM2ExNjljNGQ0MzBmNDU4YzRkODMyYzE1NzEzYmEwNGZmapuQ+w==: 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.367 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.367 request: 00:26:35.367 { 00:26:35.367 "name": "nvme0", 00:26:35.367 "trtype": "rdma", 00:26:35.367 "traddr": "192.168.100.8", 00:26:35.367 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:35.367 "adrfam": "ipv4", 00:26:35.367 "trsvcid": "4420", 00:26:35.367 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:35.367 "method": "bdev_nvme_attach_controller", 00:26:35.367 "req_id": 1 00:26:35.367 } 00:26:35.367 Got JSON-RPC error response 00:26:35.367 response: 00:26:35.367 { 00:26:35.367 "code": -32602, 00:26:35.368 "message": "Invalid parameters" 00:26:35.368 } 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.368 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 
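The rpc_cmd failure above (and the key-mismatch cases that follow) are the point of this test phase: an attach without the right DH-HMAC-CHAP material must be rejected by the target. A minimal stand-alone sketch of that check, assuming scripts/rpc.py is driven against the already-running target the way the harness's rpc_cmd/NOT helpers do; the flags and NQNs are copied from the trace, everything else is illustrative.

    # Negative-path probe: attaching with no --dhchap-key to the auth-required
    # subsystem is expected to fail with -32602 "Invalid parameters".
    if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: unauthenticated attach succeeded" >&2
        exit 1
    fi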
00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.626 request: 00:26:35.626 { 00:26:35.626 "name": "nvme0", 00:26:35.626 "trtype": "rdma", 00:26:35.626 "traddr": "192.168.100.8", 00:26:35.626 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:35.626 "adrfam": "ipv4", 00:26:35.626 "trsvcid": "4420", 00:26:35.626 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:35.626 "dhchap_key": "key2", 00:26:35.626 "method": "bdev_nvme_attach_controller", 00:26:35.626 "req_id": 1 00:26:35.626 } 00:26:35.626 Got JSON-RPC error response 00:26:35.626 response: 00:26:35.626 { 00:26:35.626 "code": -32602, 00:26:35.626 "message": "Invalid parameters" 00:26:35.626 } 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.626 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.627 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.885 request: 00:26:35.885 { 00:26:35.885 "name": "nvme0", 00:26:35.885 "trtype": "rdma", 00:26:35.885 "traddr": "192.168.100.8", 00:26:35.885 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:35.885 "adrfam": "ipv4", 00:26:35.885 "trsvcid": "4420", 00:26:35.885 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:35.885 "dhchap_key": "key1", 00:26:35.885 "dhchap_ctrlr_key": "ckey2", 00:26:35.885 "method": "bdev_nvme_attach_controller", 00:26:35.885 "req_id": 1 00:26:35.885 } 00:26:35.885 Got JSON-RPC error response 00:26:35.885 response: 00:26:35.885 { 00:26:35.885 "code": -32602, 00:26:35.885 "message": "Invalid parameters" 00:26:35.885 } 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.885 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:35.886 rmmod nvme_rdma 00:26:35.886 rmmod nvme_fabrics 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3190092 ']' 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3190092 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3190092 ']' 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3190092 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3190092 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3190092' 00:26:35.886 killing process with pid 3190092 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3190092 00:26:35.886 20:35:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3190092 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:36.142 20:35:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:26:36.142 20:35:48 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:39.423 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:39.423 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:40.796 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:26:40.796 20:35:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.YUI /tmp/spdk.key-null.MMY /tmp/spdk.key-sha256.QOU /tmp/spdk.key-sha384.Wrq /tmp/spdk.key-sha512.1ZP /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:26:40.796 20:35:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:44.078 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:44.078 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:44.078 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:44.078 00:26:44.078 real 0m54.890s 00:26:44.078 user 0m49.921s 00:26:44.078 sys 0m13.301s 00:26:44.078 20:35:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:44.078 20:35:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.078 ************************************ 00:26:44.078 END TEST nvmf_auth_host 00:26:44.078 ************************************ 00:26:44.078 20:35:56 nvmf_rdma -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:26:44.078 20:35:56 nvmf_rdma -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:26:44.078 20:35:56 nvmf_rdma -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:26:44.078 20:35:56 nvmf_rdma -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:26:44.078 20:35:56 nvmf_rdma -- nvmf/nvmf.sh@121 -- # run_test 
nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:44.078 20:35:56 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:44.078 20:35:56 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:44.078 20:35:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:44.078 ************************************ 00:26:44.078 START TEST nvmf_bdevperf 00:26:44.078 ************************************ 00:26:44.078 20:35:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:44.078 * Looking for test storage... 00:26:44.078 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:44.078 20:35:56 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.078 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:44.078 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.078 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:44.079 20:35:56 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.079 20:35:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.737 
20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:50.737 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:50.737 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:50.737 Found net devices under 0000:da:00.0: mlx_0_0 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.737 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:50.738 Found net devices under 0000:da:00.1: mlx_0_1 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@105 -- # continue 2 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:50.738 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:50.738 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:50.738 altname enp218s0f0np0 00:26:50.738 altname ens818f0np0 00:26:50.738 inet 192.168.100.8/24 scope global mlx_0_0 00:26:50.738 valid_lft forever preferred_lft forever 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:50.738 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:50.738 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:50.738 altname enp218s0f1np1 00:26:50.738 altname ens818f1np1 00:26:50.738 inet 192.168.100.9/24 scope global mlx_0_1 00:26:50.738 valid_lft forever preferred_lft forever 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 
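The interface walk above ends with nvmftestinit resolving each RDMA-capable port to its IPv4 address. Re-stated as a stand-alone shell function for readability; the pipeline is exactly the one shown in the trace, and the function name mirrors the helper being traced.

    # Resolve an RDMA netdev to its IPv4 address: column 4 of `ip -o -4 addr show`
    # is addr/prefix, so strip the prefix length with cut.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this system
    get_ip_address mlx_0_1   # -> 192.168.100.9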
00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:50.738 20:36:02 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:50.738 192.168.100.9' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:50.738 192.168.100.9' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:50.738 192.168.100.9' 00:26:50.738 20:36:03 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3204713 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3204713 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3204713 ']' 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:50.738 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.739 [2024-05-16 20:36:03.136118] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:26:50.739 [2024-05-16 20:36:03.136172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.739 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.739 [2024-05-16 20:36:03.200163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:50.739 [2024-05-16 20:36:03.280552] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.739 [2024-05-16 20:36:03.280591] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.739 [2024-05-16 20:36:03.280598] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.739 [2024-05-16 20:36:03.280604] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
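At this point nvmfappstart has launched the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xE, PID 3204713 above) and is waiting for its JSON-RPC socket. A rough equivalent of that bring-up; the readiness poll is sketched with spdk_get_version because the harness's waitforlisten internals are not shown in the trace.

    # Start the NVMe-oF target on cores 1-3 with all tracepoint groups enabled,
    # then block until /var/tmp/spdk.sock answers JSON-RPC requests.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done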
00:26:50.739 [2024-05-16 20:36:03.280609] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.739 [2024-05-16 20:36:03.282441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.739 [2024-05-16 20:36:03.282517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.739 [2024-05-16 20:36:03.282519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.996 20:36:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.260 [2024-05-16 20:36:04.008286] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8ec110/0x8f0600) succeed. 00:26:51.260 [2024-05-16 20:36:04.018521] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8ed6b0/0x931c90) succeed. 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.260 Malloc0 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.260 [2024-05-16 20:36:04.160804] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:51.260 [2024-05-16 20:36:04.161190] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.260 { 00:26:51.260 "params": { 00:26:51.260 "name": "Nvme$subsystem", 00:26:51.260 "trtype": "$TEST_TRANSPORT", 00:26:51.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.260 "adrfam": "ipv4", 00:26:51.260 "trsvcid": "$NVMF_PORT", 00:26:51.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.260 "hdgst": ${hdgst:-false}, 00:26:51.260 "ddgst": ${ddgst:-false} 00:26:51.260 }, 00:26:51.260 "method": "bdev_nvme_attach_controller" 00:26:51.260 } 00:26:51.260 EOF 00:26:51.260 )") 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:51.260 20:36:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:51.260 "params": { 00:26:51.260 "name": "Nvme1", 00:26:51.260 "trtype": "rdma", 00:26:51.260 "traddr": "192.168.100.8", 00:26:51.260 "adrfam": "ipv4", 00:26:51.260 "trsvcid": "4420", 00:26:51.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:51.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:51.260 "hdgst": false, 00:26:51.260 "ddgst": false 00:26:51.260 }, 00:26:51.260 "method": "bdev_nvme_attach_controller" 00:26:51.260 }' 00:26:51.260 [2024-05-16 20:36:04.208957] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:26:51.260 [2024-05-16 20:36:04.208999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204963 ] 00:26:51.260 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.520 [2024-05-16 20:36:04.270939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.520 [2024-05-16 20:36:04.347252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.777 Running I/O for 1 seconds... 
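The bdevperf run above never touches a config file on disk: gen_nvmf_target_json renders the JSON shown in the trace and the shell passes it to bdevperf through an anonymous descriptor, which is what the --json /dev/fd/62 argument suggests. A hedged sketch of that invocation pattern, assuming gen_nvmf_target_json is the helper traced above and that the command runs from the SPDK source tree:

    # Process substitution exposes the rendered JSON on an ephemeral
    # /dev/fd/N path, so no temporary target config file is written.
    ./build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 1

The -q 128 / -o 4096 / -w verify options match the queue depth, I/O size and workload reported in the result table that follows.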
00:26:52.710 00:26:52.710 Latency(us) 00:26:52.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.710 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:52.710 Verification LBA range: start 0x0 length 0x4000 00:26:52.710 Nvme1n1 : 1.01 17798.60 69.53 0.00 0.00 7151.35 2621.44 12046.14 00:26:52.710 =================================================================================================================== 00:26:52.710 Total : 17798.60 69.53 0.00 0.00 7151.35 2621.44 12046.14 00:26:52.968 20:36:05 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3205356 00:26:52.968 20:36:05 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:52.968 20:36:05 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:52.968 20:36:05 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:52.968 20:36:05 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:52.968 20:36:05 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:52.968 20:36:05 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.969 20:36:05 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.969 { 00:26:52.969 "params": { 00:26:52.969 "name": "Nvme$subsystem", 00:26:52.969 "trtype": "$TEST_TRANSPORT", 00:26:52.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.969 "adrfam": "ipv4", 00:26:52.969 "trsvcid": "$NVMF_PORT", 00:26:52.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.969 "hdgst": ${hdgst:-false}, 00:26:52.969 "ddgst": ${ddgst:-false} 00:26:52.969 }, 00:26:52.969 "method": "bdev_nvme_attach_controller" 00:26:52.969 } 00:26:52.969 EOF 00:26:52.969 )") 00:26:52.969 20:36:05 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:52.969 20:36:05 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:52.969 20:36:05 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:52.969 20:36:05 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:52.969 "params": { 00:26:52.969 "name": "Nvme1", 00:26:52.969 "trtype": "rdma", 00:26:52.969 "traddr": "192.168.100.8", 00:26:52.969 "adrfam": "ipv4", 00:26:52.969 "trsvcid": "4420", 00:26:52.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.969 "hdgst": false, 00:26:52.969 "ddgst": false 00:26:52.969 }, 00:26:52.969 "method": "bdev_nvme_attach_controller" 00:26:52.969 }' 00:26:52.969 [2024-05-16 20:36:05.778071] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:26:52.969 [2024-05-16 20:36:05.778120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205356 ] 00:26:52.969 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.969 [2024-05-16 20:36:05.838647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.969 [2024-05-16 20:36:05.913075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.226 Running I/O for 15 seconds... 
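A quick cross-check of the one-second run summarized above: with 4096-byte I/O, throughput in MiB/s should be IOPS * 4096 / 1048576, and the reported columns agree:

    # 17798.60 IOPS at 4096-byte blocks -> expected MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 17798.60 * 4096 / 1048576 }'    # prints 69.53 MiB/s

The second bdevperf instance launched above repeats the same verify workload for 15 seconds (-t 15), which leaves enough runtime for the test to kill and restart the target underneath it, as the following lines show.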
00:26:55.753 20:36:08 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3204713 00:26:55.753 20:36:08 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:57.127 [2024-05-16 20:36:09.773417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.127 [2024-05-16 20:36:09.773601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.127 [2024-05-16 20:36:09.773609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 
00:26:57.128 [2024-05-16 20:36:09.773894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.773988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.773994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.128 [2024-05-16 20:36:09.774174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.128 [2024-05-16 20:36:09.774181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.129 [2024-05-16 20:36:09.774189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.129 [2024-05-16 20:36:09.774203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.129 [2024-05-16 20:36:09.774217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.129 [2024-05-16 20:36:09.774231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.129 [2024-05-16 20:36:09.774245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774447] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:116912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x3b200 00:26:57.129 [2024-05-16 20:36:09.774652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.129 [2024-05-16 20:36:09.774660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116984 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x20000753e000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 
key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 
20:36:09.774964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.774990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.774998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.130 [2024-05-16 20:36:09.775165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x3b200 00:26:57.130 [2024-05-16 20:36:09.775171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.775179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x3b200 00:26:57.131 [2024-05-16 20:36:09.775185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.775192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x3b200 00:26:57.131 [2024-05-16 20:36:09.775199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.775206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x3b200 00:26:57.131 [2024-05-16 20:36:09.775212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.775219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x3b200 00:26:57.131 [2024-05-16 20:36:09.775227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.775234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x3b200 00:26:57.131 [2024-05-16 20:36:09.775240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.775247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x3b200 00:26:57.131 [2024-05-16 20:36:09.775253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.775261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x3b200 00:26:57.131 [2024-05-16 20:36:09.775267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.775274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x3b200 00:26:57.131 [2024-05-16 20:36:09.775280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:2600 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.786285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.131 [2024-05-16 20:36:09.786300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.131 [2024-05-16 20:36:09.786308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117320 len:8 PRP1 0x0 PRP2 0x0 00:26:57.131 [2024-05-16 20:36:09.786317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.786355] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
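Everything between the kill -9 of the target earlier in this run and the "reset controller" line above is one event repeated per outstanding command: with the target process gone, every queued WRITE and READ on the I/O qpair completes with ABORTED - SQ DELETION, and bdev_nvme then frees the qpair and schedules a controller reset. When reading floods like this, a couple of one-liners help; bdevperf.log below is a hypothetical file holding this console output:

    # Count the aborted completions and report the LBA range they cover,
    # based on the "ABORTED - SQ DELETION" and "lba:<n>" strings above.
    grep -c 'ABORTED - SQ DELETION' bdevperf.log
    grep -o 'lba:[0-9]*' bdevperf.log | cut -d: -f2 | sort -n | sed -n '1p;$p'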
00:26:57.131 [2024-05-16 20:36:09.786384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.131 [2024-05-16 20:36:09.786393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.786402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.131 [2024-05-16 20:36:09.786410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.786419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.131 [2024-05-16 20:36:09.786431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.786440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.131 [2024-05-16 20:36:09.786448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.131 [2024-05-16 20:36:09.806278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:57.131 [2024-05-16 20:36:09.806293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:57.131 [2024-05-16 20:36:09.806299] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:57.131 [2024-05-16 20:36:09.809019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.131 [2024-05-16 20:36:09.812276] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:57.131 [2024-05-16 20:36:09.812324] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:57.131 [2024-05-16 20:36:09.812343] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:58.064 [2024-05-16 20:36:10.816477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:58.064 [2024-05-16 20:36:10.816504] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.064 [2024-05-16 20:36:10.816682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.064 [2024-05-16 20:36:10.816690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.064 [2024-05-16 20:36:10.816697] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:58.064 [2024-05-16 20:36:10.819487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
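From here the host retries roughly once per second: each attempt ends in RDMA_CM_EVENT_REJECTED and RDMA connect error -74 because the target process has been killed and nothing is listening on 192.168.100.8:4420 yet, so controller reinitialization keeps failing until the target is restarted further down. Outside the harness the same condition can be probed by hand with nvme-cli, assuming it is installed on the host:

  # Succeeds only once a target is listening again on the RDMA port the host keeps retrying.
  nvme discover -t rdma -a 192.168.100.8 -s 4420 \
    && echo 'target is back; the next reconnect attempt should succeed' \
    || echo 'still rejected; nvmf_tgt has not re-created its listener yet'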
00:26:58.064 [2024-05-16 20:36:10.822613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.064 [2024-05-16 20:36:10.825153] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:58.064 [2024-05-16 20:36:10.825173] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:58.064 [2024-05-16 20:36:10.825179] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:58.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3204713 Killed "${NVMF_APP[@]}" "$@" 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3206511 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3206511 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3206511 ']' 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:58.997 20:36:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.997 [2024-05-16 20:36:11.800394] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:26:58.997 [2024-05-16 20:36:11.800438] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.997 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.997 [2024-05-16 20:36:11.829197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:58.997 [2024-05-16 20:36:11.829217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
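At this point bdevperf.sh (line 35) has killed the original nvmf_tgt (PID 3204713), and tgt_init/nvmfappstart bring up a replacement with core mask 0xE and the full 0xFFFF tracepoint mask, then block until the RPC socket answers. A stripped-down sketch of that restart step, with paths and variable names assumed from the log and the waitforlisten helper sourced from autotest_common.sh:

  # Sketch of the target restart performed by tgt_init/nvmfappstart (simplified).
  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  kill -9 "$old_nvmfpid" 2>/dev/null || true   # the old target is gone; the host enters its reconnect loop
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
  waitforlisten "$nvmfpid"                     # returns once /var/tmp/spdk.sock accepts RPCs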
00:26:58.997 [2024-05-16 20:36:11.829398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.997 [2024-05-16 20:36:11.829406] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.997 [2024-05-16 20:36:11.829413] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:58.997 [2024-05-16 20:36:11.830738] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:58.997 [2024-05-16 20:36:11.832206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.997 [2024-05-16 20:36:11.843775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.997 [2024-05-16 20:36:11.846296] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:58.997 [2024-05-16 20:36:11.846313] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:58.997 [2024-05-16 20:36:11.846319] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:58.997 [2024-05-16 20:36:11.860620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.997 [2024-05-16 20:36:11.933509] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.997 [2024-05-16 20:36:11.933548] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.997 [2024-05-16 20:36:11.933555] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.997 [2024-05-16 20:36:11.933561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.997 [2024-05-16 20:36:11.933566] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
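The replacement target starts with tracepoint group mask 0xFFFF, so the trace it mentions can be captured while the reconnect attempts are still failing and compared against the host-side errors above. Following the hints printed by the application itself (the output file names here are assumptions):

  # Snapshot the nvmf trace of app instance 0, as the notice above suggests.
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
  # Or keep the raw shared-memory trace file for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0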
00:26:58.997 [2024-05-16 20:36:11.933620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.997 [2024-05-16 20:36:11.933704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.997 [2024-05-16 20:36:11.933705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.930 [2024-05-16 20:36:12.684570] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2527110/0x252b600) succeed. 00:26:59.930 [2024-05-16 20:36:12.694694] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x25286b0/0x256cc90) succeed. 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.930 Malloc0 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.930 [2024-05-16 20:36:12.842929] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:59.930 
[2024-05-16 20:36:12.843309] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.930 20:36:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3205356 00:26:59.930 [2024-05-16 20:36:12.850216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:59.930 [2024-05-16 20:36:12.850240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.930 [2024-05-16 20:36:12.850417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.930 [2024-05-16 20:36:12.850430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.930 [2024-05-16 20:36:12.850438] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:59.930 [2024-05-16 20:36:12.853210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.930 [2024-05-16 20:36:12.859118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.930 [2024-05-16 20:36:12.905848] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:09.895 00:27:09.895 Latency(us) 00:27:09.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.895 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:09.895 Verification LBA range: start 0x0 length 0x4000 00:27:09.895 Nvme1n1 : 15.01 12858.66 50.23 10166.51 0.00 5539.75 370.59 1070546.16 00:27:09.895 =================================================================================================================== 00:27:09.895 Total : 12858.66 50.23 10166.51 0.00 5539.75 370.59 1070546.16 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:09.895 rmmod nvme_rdma 00:27:09.895 rmmod nvme_fabrics 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3206511 ']' 00:27:09.895 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3206511 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3206511 ']' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3206511 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3206511 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3206511' 00:27:09.896 killing process with pid 3206511 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3206511 00:27:09.896 [2024-05-16 20:36:21.434085] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 3206511 00:27:09.896 [2024-05-16 20:36:21.504965] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:09.896 00:27:09.896 real 0m25.097s 00:27:09.896 user 1m4.438s 00:27:09.896 sys 0m5.841s 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:09.896 20:36:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.896 ************************************ 00:27:09.896 END TEST nvmf_bdevperf 00:27:09.896 ************************************ 00:27:09.896 20:36:21 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:09.896 20:36:21 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:09.896 20:36:21 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:09.896 20:36:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:09.896 ************************************ 00:27:09.896 START TEST nvmf_target_disconnect 00:27:09.896 ************************************ 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:09.896 * Looking for test storage... 
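Between the bdevperf suite and the target_disconnect suite the harness tears the host side down: nvme-rdma and nvme-fabrics are unloaded (the rmmod lines above) and the nvmf_tgt that served bdevperf is killed via its stored PID. Roughly, and with the error handling of the real nvmftestfini/killprocess helpers omitted, that amounts to:

  # Simplified host-side teardown between test suites (assumed reduction of nvmftestfini).
  modprobe -v -r nvme-rdma       # prints the rmmod nvme_rdma / rmmod nvme_fabrics lines seen above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # killprocess: stop the nvmf_tgt used by the bdevperf run
  wait "$nvmfpid" 2>/dev/null || true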
00:27:09.896 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:09.896 20:36:21 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:27:15.163 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:27:15.163 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:27:15.163 Found net devices under 0000:da:00.0: mlx_0_0 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:27:15.163 Found net devices under 0000:da:00.1: mlx_0_1 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:15.163 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:15.164 20:36:27 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:15.164 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:15.164 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:27:15.164 altname enp218s0f0np0 00:27:15.164 altname ens818f0np0 00:27:15.164 inet 192.168.100.8/24 scope global mlx_0_0 00:27:15.164 valid_lft forever preferred_lft forever 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:15.164 20:36:27 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:15.164 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:15.164 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:27:15.164 altname enp218s0f1np1 00:27:15.164 altname ens818f1np1 00:27:15.164 inet 192.168.100.9/24 scope global mlx_0_1 00:27:15.164 valid_lft forever preferred_lft forever 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:15.164 20:36:27 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:15.164 192.168.100.9' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:15.164 192.168.100.9' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:15.164 192.168.100.9' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:15.164 20:36:27 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:15.164 ************************************ 00:27:15.164 START TEST nvmf_target_disconnect_tc1 00:27:15.164 ************************************ 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:27:15.164 20:36:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:15.164 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.164 [2024-05-16 20:36:28.129018] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:15.164 [2024-05-16 20:36:28.129063] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:15.164 [2024-05-16 20:36:28.129074] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:27:16.534 [2024-05-16 20:36:29.133024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:16.534 [2024-05-16 20:36:29.133076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
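Test case 1 deliberately launches the reconnect example before any subsystem is listening, which is why the CM reject and the spdk_nvme_probe() failure recorded just below are the expected result; the NOT/es=1 bookkeeping only passes when the binary exits non-zero. Run by hand, the same invocation (arguments taken verbatim from the log) looks like this:

  # Expected to fail while nothing is listening on 192.168.100.8:4420.
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
  echo "reconnect exit code: $?"   # tc1 passes only if this is non-zero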
00:27:16.534 [2024-05-16 20:36:29.133102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:27:16.534 [2024-05-16 20:36:29.133150] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:16.534 [2024-05-16 20:36:29.133158] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:16.534 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:27:16.534 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:16.534 Initializing NVMe Controllers 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:16.534 00:27:16.534 real 0m1.113s 00:27:16.534 user 0m0.954s 00:27:16.534 sys 0m0.148s 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:16.534 ************************************ 00:27:16.534 END TEST nvmf_target_disconnect_tc1 00:27:16.534 ************************************ 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:16.534 ************************************ 00:27:16.534 START TEST nvmf_target_disconnect_tc2 00:27:16.534 ************************************ 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3211733 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3211733 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3211733 ']' 00:27:16.534 20:36:29 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.534 20:36:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:16.534 [2024-05-16 20:36:29.258055] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:27:16.534 [2024-05-16 20:36:29.258091] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.534 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.534 [2024-05-16 20:36:29.330867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.534 [2024-05-16 20:36:29.408985] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.534 [2024-05-16 20:36:29.409021] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.534 [2024-05-16 20:36:29.409027] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.534 [2024-05-16 20:36:29.409033] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.534 [2024-05-16 20:36:29.409038] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
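For tc2 the target is restarted with -m 0xF0, which pins its four reactors to cores 4-7 (the reactor start-up notices just below confirm this), while the initiator side keeps mask 0xF on cores 0-3 so the two processes never share cores. A core mask can be expanded into its CPU list with a few lines of plain shell if you want to double-check a run:

  # Expand an SPDK core mask into the CPU list it selects.
  mask=0xF0
  for ((cpu = 0; cpu < 64; cpu++)); do
      (( (mask >> cpu) & 1 )) && printf '%d ' "$cpu"
  done
  echo   # prints: 4 5 6 7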
00:27:16.534 [2024-05-16 20:36:29.409170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:27:16.534 [2024-05-16 20:36:29.409729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:27:16.535 [2024-05-16 20:36:29.409826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:16.535 [2024-05-16 20:36:29.409827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.099 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.357 Malloc0 00:27:17.357 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.357 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:17.357 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.357 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.357 [2024-05-16 20:36:30.130622] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cd7a10/0x1ce36e0) succeed. 00:27:17.357 [2024-05-16 20:36:30.140938] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cd9050/0x1d83790) succeed. 
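With the RDMA transport created on the mlx5 devices and the Malloc0 bdev in place, the subsystem, namespace and listener RPCs that follow just below complete the tc2 target. Collected into one plain rpc.py sequence against the default /var/tmp/spdk.sock socket, the target-side setup performed by these rpc_cmd calls is:

  # Same configuration as the rpc_cmd calls in this test, issued directly with scripts/rpc.py.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420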
00:27:17.357 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.357 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.357 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.358 [2024-05-16 20:36:30.278672] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:17.358 [2024-05-16 20:36:30.279031] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3211979 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:17.358 20:36:30 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:17.358 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.884 20:36:32 
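The same RPC channel is then used to create the NVMe-oF subsystem, attach Malloc0 as its namespace, and expose both the subsystem and the discovery service on 192.168.100.8:4420, after which the reconnect example is launched against that listener (reconnectpid=3211979). A sketch of the equivalent steps, with every argument taken from the trace and paths assumed to match this workspace layout:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # host side: 32-deep 4 KiB randrw load for 10 s on cores 0-3 against the RDMA listener
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &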
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3211733 00:27:19.884 20:36:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Read completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.815 Write completed with error (sct=0, sc=8) 00:27:20.815 starting I/O failed 00:27:20.816 Write completed with error (sct=0, sc=8) 00:27:20.816 starting I/O failed 00:27:20.816 Read completed with error (sct=0, sc=8) 00:27:20.816 starting I/O failed 00:27:20.816 Read completed with error (sct=0, sc=8) 00:27:20.816 starting I/O failed 00:27:20.816 Write completed with error (sct=0, sc=8) 00:27:20.816 starting I/O failed 00:27:20.816 [2024-05-16 20:36:33.463163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.380 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3211733 Killed "${NVMF_APP[@]}" "$@" 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:27:21.380 20:36:34 
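At this point the test hard-kills the first target while the reconnect workload is in flight, which is what produces the burst of Read/Write completions with error (sct=0, sc=8), the CQ transport error -6 on qpair 1, and bash reporting PID 3211733 as Killed. A minimal sketch of that disconnect step, using the PID and sleep taken from the log (how the harness actually tracks the PID is not shown here):

    # hard-kill the running target while I/O is outstanding, then give the host time to notice
    nvmfpid=3211733        # taken from the log output above
    kill -9 "$nvmfpid"
    sleep 2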
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3212671 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3212671 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3212671 ']' 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:21.380 20:36:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.380 [2024-05-16 20:36:34.349098] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:27:21.380 [2024-05-16 20:36:34.349147] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.639 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.639 [2024-05-16 20:36:34.422960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Write completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 Read completed with error (sct=0, sc=8) 00:27:21.639 starting I/O failed 00:27:21.639 [2024-05-16 20:36:34.468209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.639 [2024-05-16 20:36:34.469913] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel 
(status = 8) 00:27:21.639 [2024-05-16 20:36:34.469931] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:21.639 [2024-05-16 20:36:34.469937] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:21.639 [2024-05-16 20:36:34.494139] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.639 [2024-05-16 20:36:34.494173] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.639 [2024-05-16 20:36:34.494180] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.639 [2024-05-16 20:36:34.494185] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.639 [2024-05-16 20:36:34.494190] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.639 [2024-05-16 20:36:34.494314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:27:21.639 [2024-05-16 20:36:34.494439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:27:21.639 [2024-05-16 20:36:34.494530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:21.639 [2024-05-16 20:36:34.494531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.204 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.462 Malloc0 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.462 [2024-05-16 20:36:35.234339] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18e0a10/0x18ec6e0) succeed. 00:27:22.462 [2024-05-16 20:36:35.244701] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18e2050/0x198c790) succeed. 
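After the kill, a second nvmf_tgt (nvmfpid=3212671) is started with the same core mask and waitforlisten blocks until its RPC socket answers; meanwhile the host records a second CQ transport error and an RDMA_CM_EVENT_REJECTED because the new target is still initializing when the reconnect attempt arrives. A purely illustrative sketch of a waitforlisten-style poll (the real helper lives in autotest_common.sh and may differ):

    # poll the JSON-RPC socket of the restarted target until it answers, or give up
    for i in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done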
00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.462 [2024-05-16 20:36:35.383346] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:22.462 [2024-05-16 20:36:35.383737] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.462 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.463 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.463 20:36:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3211979 00:27:22.721 [2024-05-16 20:36:35.474057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 
00:27:22.721 [2024-05-16 20:36:35.481647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.481701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.481719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.481726] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.481733] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.492458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 00:27:22.721 [2024-05-16 20:36:35.501818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.501859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.501876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.501883] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.501889] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.512413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 00:27:22.721 [2024-05-16 20:36:35.521942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.521973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.521988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.521995] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.522003] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.532230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 
00:27:22.721 [2024-05-16 20:36:35.541879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.541922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.541936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.541943] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.541949] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.552146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 00:27:22.721 [2024-05-16 20:36:35.561941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.561981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.561996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.562002] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.562008] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.572444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 00:27:22.721 [2024-05-16 20:36:35.581905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.581943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.581956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.581963] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.581969] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.592468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 
00:27:22.721 [2024-05-16 20:36:35.601994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.602026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.602041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.602048] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.602053] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.612294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 00:27:22.721 [2024-05-16 20:36:35.622027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.622065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.622080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.622086] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.622092] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.632387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 00:27:22.721 [2024-05-16 20:36:35.642173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.642218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.642233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.642239] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.642245] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.652462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 
00:27:22.721 [2024-05-16 20:36:35.662166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.662202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.662216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.662222] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.662228] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.672487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 00:27:22.721 [2024-05-16 20:36:35.682282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.721 [2024-05-16 20:36:35.682314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.721 [2024-05-16 20:36:35.682327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.721 [2024-05-16 20:36:35.682334] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.721 [2024-05-16 20:36:35.682340] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.721 [2024-05-16 20:36:35.692771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.721 qpair failed and we were unable to recover it. 00:27:22.721 [2024-05-16 20:36:35.702370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.722 [2024-05-16 20:36:35.702410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.722 [2024-05-16 20:36:35.702433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.722 [2024-05-16 20:36:35.702443] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.722 [2024-05-16 20:36:35.702449] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.722 [2024-05-16 20:36:35.712693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.722 qpair failed and we were unable to recover it. 
00:27:22.980 [2024-05-16 20:36:35.722391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.722435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.722454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.722461] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.722466] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.732858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 00:27:22.980 [2024-05-16 20:36:35.742513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.742546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.742560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.742567] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.742573] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.752672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 00:27:22.980 [2024-05-16 20:36:35.762470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.762511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.762526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.762533] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.762539] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.772926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 
00:27:22.980 [2024-05-16 20:36:35.782546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.782585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.782599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.782606] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.782612] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.792914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 00:27:22.980 [2024-05-16 20:36:35.802584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.802625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.802641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.802648] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.802653] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.813083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 00:27:22.980 [2024-05-16 20:36:35.822669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.822705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.822719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.822726] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.822731] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.832949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 
00:27:22.980 [2024-05-16 20:36:35.842713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.842747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.842762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.842768] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.842774] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.853173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 00:27:22.980 [2024-05-16 20:36:35.862906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.862942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.862956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.862962] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.862968] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.873265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 00:27:22.980 [2024-05-16 20:36:35.882905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.882942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.882959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.882966] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.882971] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.893179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 
00:27:22.980 [2024-05-16 20:36:35.902921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.902960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.902975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.902981] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.902987] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.913392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 00:27:22.980 [2024-05-16 20:36:35.923020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.980 [2024-05-16 20:36:35.923060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.980 [2024-05-16 20:36:35.923073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.980 [2024-05-16 20:36:35.923080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.980 [2024-05-16 20:36:35.923085] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.980 [2024-05-16 20:36:35.933413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.980 qpair failed and we were unable to recover it. 00:27:22.980 [2024-05-16 20:36:35.943092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.981 [2024-05-16 20:36:35.943133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.981 [2024-05-16 20:36:35.943146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.981 [2024-05-16 20:36:35.943152] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.981 [2024-05-16 20:36:35.943158] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:22.981 [2024-05-16 20:36:35.953383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.981 qpair failed and we were unable to recover it. 
00:27:22.981 [2024-05-16 20:36:35.963100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.981 [2024-05-16 20:36:35.963144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.981 [2024-05-16 20:36:35.963157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.981 [2024-05-16 20:36:35.963164] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.981 [2024-05-16 20:36:35.963172] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.240 [2024-05-16 20:36:35.973405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.240 qpair failed and we were unable to recover it. 00:27:23.240 [2024-05-16 20:36:35.983192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.240 [2024-05-16 20:36:35.983225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.240 [2024-05-16 20:36:35.983243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.240 [2024-05-16 20:36:35.983250] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.240 [2024-05-16 20:36:35.983256] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.240 [2024-05-16 20:36:35.993561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.240 qpair failed and we were unable to recover it. 00:27:23.240 [2024-05-16 20:36:36.003309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.240 [2024-05-16 20:36:36.003350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.240 [2024-05-16 20:36:36.003367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.240 [2024-05-16 20:36:36.003373] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.240 [2024-05-16 20:36:36.003379] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.240 [2024-05-16 20:36:36.013635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.240 qpair failed and we were unable to recover it. 
00:27:23.240 [2024-05-16 20:36:36.023316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.240 [2024-05-16 20:36:36.023353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.240 [2024-05-16 20:36:36.023368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.240 [2024-05-16 20:36:36.023374] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.240 [2024-05-16 20:36:36.023380] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.240 [2024-05-16 20:36:36.033675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.240 qpair failed and we were unable to recover it. 00:27:23.240 [2024-05-16 20:36:36.043379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.240 [2024-05-16 20:36:36.043427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.240 [2024-05-16 20:36:36.043442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.240 [2024-05-16 20:36:36.043448] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.240 [2024-05-16 20:36:36.043454] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.240 [2024-05-16 20:36:36.053567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 00:27:23.241 [2024-05-16 20:36:36.063393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.063431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.063446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.063452] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.063458] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.241 [2024-05-16 20:36:36.073752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 
00:27:23.241 [2024-05-16 20:36:36.083514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.083546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.083560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.083566] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.083572] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.241 [2024-05-16 20:36:36.093752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 00:27:23.241 [2024-05-16 20:36:36.103587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.103625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.103639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.103646] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.103651] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.241 [2024-05-16 20:36:36.113934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 00:27:23.241 [2024-05-16 20:36:36.123621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.123658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.123672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.123678] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.123684] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.241 [2024-05-16 20:36:36.134091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 
00:27:23.241 [2024-05-16 20:36:36.143642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.143675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.143689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.143700] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.143706] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.241 [2024-05-16 20:36:36.153896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 00:27:23.241 [2024-05-16 20:36:36.163735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.163774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.163788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.163794] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.163801] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.241 [2024-05-16 20:36:36.174105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 00:27:23.241 [2024-05-16 20:36:36.183728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.183766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.183780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.183787] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.183792] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.241 [2024-05-16 20:36:36.194312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 
00:27:23.241 [2024-05-16 20:36:36.203763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.203801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.203816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.203822] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.203828] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.241 [2024-05-16 20:36:36.214267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.241 qpair failed and we were unable to recover it. 00:27:23.241 [2024-05-16 20:36:36.223957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.241 [2024-05-16 20:36:36.223994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.241 [2024-05-16 20:36:36.224009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.241 [2024-05-16 20:36:36.224015] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.241 [2024-05-16 20:36:36.224021] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.501 [2024-05-16 20:36:36.234314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.501 qpair failed and we were unable to recover it. 00:27:23.501 [2024-05-16 20:36:36.243978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.501 [2024-05-16 20:36:36.244008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.501 [2024-05-16 20:36:36.244027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.501 [2024-05-16 20:36:36.244034] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.501 [2024-05-16 20:36:36.244040] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.501 [2024-05-16 20:36:36.254391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.501 qpair failed and we were unable to recover it. 
00:27:23.501 [2024-05-16 20:36:36.263912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.501 [2024-05-16 20:36:36.263950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.501 [2024-05-16 20:36:36.263965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.501 [2024-05-16 20:36:36.263972] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.501 [2024-05-16 20:36:36.263977] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.501 [2024-05-16 20:36:36.274330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.501 qpair failed and we were unable to recover it. 00:27:23.501 [2024-05-16 20:36:36.284106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.501 [2024-05-16 20:36:36.284145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.501 [2024-05-16 20:36:36.284159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.501 [2024-05-16 20:36:36.284165] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.501 [2024-05-16 20:36:36.284171] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.501 [2024-05-16 20:36:36.294514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.501 qpair failed and we were unable to recover it. 00:27:23.501 [2024-05-16 20:36:36.304080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.304117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.304131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.304138] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.304144] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.314498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 
00:27:23.502 [2024-05-16 20:36:36.324172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.324207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.324224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.324231] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.324236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.334566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 00:27:23.502 [2024-05-16 20:36:36.344229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.344267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.344281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.344287] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.344293] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.354730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 00:27:23.502 [2024-05-16 20:36:36.364319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.364362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.364376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.364383] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.364389] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.374577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 
00:27:23.502 [2024-05-16 20:36:36.384339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.384377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.384391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.384397] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.384403] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.394802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 00:27:23.502 [2024-05-16 20:36:36.404349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.404383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.404398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.404404] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.404413] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.414878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 00:27:23.502 [2024-05-16 20:36:36.424450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.424487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.424501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.424507] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.424513] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.434994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 
00:27:23.502 [2024-05-16 20:36:36.444544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.444582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.444596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.444602] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.444608] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.454949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 00:27:23.502 [2024-05-16 20:36:36.464599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.464642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.464656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.464662] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.464668] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.502 [2024-05-16 20:36:36.474955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.502 qpair failed and we were unable to recover it. 00:27:23.502 [2024-05-16 20:36:36.484673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.502 [2024-05-16 20:36:36.484712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.502 [2024-05-16 20:36:36.484726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.502 [2024-05-16 20:36:36.484732] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.502 [2024-05-16 20:36:36.484738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.495075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 
00:27:23.760 [2024-05-16 20:36:36.504713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.760 [2024-05-16 20:36:36.504752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.760 [2024-05-16 20:36:36.504771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.760 [2024-05-16 20:36:36.504778] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.760 [2024-05-16 20:36:36.504785] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.515061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 00:27:23.760 [2024-05-16 20:36:36.524849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.760 [2024-05-16 20:36:36.524887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.760 [2024-05-16 20:36:36.524902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.760 [2024-05-16 20:36:36.524909] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.760 [2024-05-16 20:36:36.524915] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.535198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 00:27:23.760 [2024-05-16 20:36:36.544868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.760 [2024-05-16 20:36:36.544904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.760 [2024-05-16 20:36:36.544918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.760 [2024-05-16 20:36:36.544925] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.760 [2024-05-16 20:36:36.544931] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.555256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 
00:27:23.760 [2024-05-16 20:36:36.564916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.760 [2024-05-16 20:36:36.564947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.760 [2024-05-16 20:36:36.564961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.760 [2024-05-16 20:36:36.564968] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.760 [2024-05-16 20:36:36.564974] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.575376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 00:27:23.760 [2024-05-16 20:36:36.584935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.760 [2024-05-16 20:36:36.584973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.760 [2024-05-16 20:36:36.584987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.760 [2024-05-16 20:36:36.584997] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.760 [2024-05-16 20:36:36.585002] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.595646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 00:27:23.760 [2024-05-16 20:36:36.605140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.760 [2024-05-16 20:36:36.605181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.760 [2024-05-16 20:36:36.605196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.760 [2024-05-16 20:36:36.605202] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.760 [2024-05-16 20:36:36.605208] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.615507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 
00:27:23.760 [2024-05-16 20:36:36.625200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.760 [2024-05-16 20:36:36.625239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.760 [2024-05-16 20:36:36.625254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.760 [2024-05-16 20:36:36.625261] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.760 [2024-05-16 20:36:36.625268] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.635807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 00:27:23.760 [2024-05-16 20:36:36.645286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.760 [2024-05-16 20:36:36.645327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.760 [2024-05-16 20:36:36.645341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.760 [2024-05-16 20:36:36.645348] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.760 [2024-05-16 20:36:36.645353] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.760 [2024-05-16 20:36:36.655528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.760 qpair failed and we were unable to recover it. 00:27:23.760 [2024-05-16 20:36:36.665228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.761 [2024-05-16 20:36:36.665270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.761 [2024-05-16 20:36:36.665283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.761 [2024-05-16 20:36:36.665289] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.761 [2024-05-16 20:36:36.665295] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.761 [2024-05-16 20:36:36.675728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.761 qpair failed and we were unable to recover it. 
00:27:23.761 [2024-05-16 20:36:36.685326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.761 [2024-05-16 20:36:36.685370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.761 [2024-05-16 20:36:36.685384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.761 [2024-05-16 20:36:36.685391] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.761 [2024-05-16 20:36:36.685396] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.761 [2024-05-16 20:36:36.695723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.761 qpair failed and we were unable to recover it. 00:27:23.761 [2024-05-16 20:36:36.705375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.761 [2024-05-16 20:36:36.705412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.761 [2024-05-16 20:36:36.705433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.761 [2024-05-16 20:36:36.705440] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.761 [2024-05-16 20:36:36.705446] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.761 [2024-05-16 20:36:36.716027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.761 qpair failed and we were unable to recover it. 00:27:23.761 [2024-05-16 20:36:36.725406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.761 [2024-05-16 20:36:36.725447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.761 [2024-05-16 20:36:36.725464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.761 [2024-05-16 20:36:36.725471] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.761 [2024-05-16 20:36:36.725476] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:23.761 [2024-05-16 20:36:36.735843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.761 qpair failed and we were unable to recover it. 
00:27:23.761 [2024-05-16 20:36:36.745512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.761 [2024-05-16 20:36:36.745550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.761 [2024-05-16 20:36:36.745564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.761 [2024-05-16 20:36:36.745570] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.761 [2024-05-16 20:36:36.745576] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.756049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:36.765573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.765616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.765635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.765643] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.765649] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.775700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:36.785581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.785619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.785633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.785640] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.785646] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.796065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 
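Editor's note: the repeated "CQ transport error -6 (No such device or address)" entries above correspond to spdk_nvme_qpair_process_completions() returning a negative errno (-ENXIO) once the RDMA queue pair becomes unusable. The following is a minimal, illustrative C sketch of how an initiator-side poll loop surfaces that return value; poll_io_qpair is a hypothetical helper and is not code from this test run.

    /*
     * Illustrative sketch only (not part of the captured log).
     * spdk_nvme_qpair_process_completions() returns the number of
     * completions processed, or a negative errno when the transport
     * fails -- here -ENXIO (-6), matching the log entries above.
     */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Poll one I/O qpair; return false once the transport reports a fatal error. */
    static bool
    poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

            if (rc < 0) {
                    /* e.g. -ENXIO (-6): the qpair can no longer be used. */
                    fprintf(stderr, "qpair poll failed: %d\n", rc);
                    return false;
            }
            return true;
    }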
00:27:24.019 [2024-05-16 20:36:36.805516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.805548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.805563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.805570] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.805575] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.816253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:36.825795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.825833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.825847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.825854] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.825859] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.836145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:36.845625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.845666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.845681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.845687] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.845696] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.856012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 
00:27:24.019 [2024-05-16 20:36:36.865897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.865938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.865952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.865959] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.865965] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.876072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:36.885844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.885882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.885896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.885903] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.885908] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.896347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:36.905975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.906012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.906027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.906033] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.906039] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.916403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 
00:27:24.019 [2024-05-16 20:36:36.925864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.925902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.925916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.925922] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.925928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.936461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:36.946096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.946134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.946148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.946154] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.946159] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.956575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:36.966171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.966208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.966222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.966229] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.966235] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.976635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 
00:27:24.019 [2024-05-16 20:36:36.986249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:36.986287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:36.986302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.019 [2024-05-16 20:36:36.986309] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.019 [2024-05-16 20:36:36.986316] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.019 [2024-05-16 20:36:36.996746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.019 qpair failed and we were unable to recover it. 00:27:24.019 [2024-05-16 20:36:37.006306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.019 [2024-05-16 20:36:37.006342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.019 [2024-05-16 20:36:37.006358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.020 [2024-05-16 20:36:37.006367] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.020 [2024-05-16 20:36:37.006375] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.277 [2024-05-16 20:36:37.016700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-05-16 20:36:37.026346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.277 [2024-05-16 20:36:37.026381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.277 [2024-05-16 20:36:37.026397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.277 [2024-05-16 20:36:37.026407] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.277 [2024-05-16 20:36:37.026412] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.277 [2024-05-16 20:36:37.036798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.277 qpair failed and we were unable to recover it. 
00:27:24.277 [2024-05-16 20:36:37.046365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.277 [2024-05-16 20:36:37.046404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.277 [2024-05-16 20:36:37.046418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.277 [2024-05-16 20:36:37.046432] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.277 [2024-05-16 20:36:37.046438] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.277 [2024-05-16 20:36:37.056836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-05-16 20:36:37.066439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.277 [2024-05-16 20:36:37.066477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.277 [2024-05-16 20:36:37.066491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.277 [2024-05-16 20:36:37.066498] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.277 [2024-05-16 20:36:37.066504] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.277 [2024-05-16 20:36:37.076900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-05-16 20:36:37.086544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.277 [2024-05-16 20:36:37.086585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.277 [2024-05-16 20:36:37.086598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.277 [2024-05-16 20:36:37.086605] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.277 [2024-05-16 20:36:37.086610] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.277 [2024-05-16 20:36:37.097023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.277 qpair failed and we were unable to recover it. 
00:27:24.277 [2024-05-16 20:36:37.106746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.277 [2024-05-16 20:36:37.106782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.277 [2024-05-16 20:36:37.106797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.277 [2024-05-16 20:36:37.106803] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.277 [2024-05-16 20:36:37.106809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.277 [2024-05-16 20:36:37.117006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-05-16 20:36:37.126690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.277 [2024-05-16 20:36:37.126727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.277 [2024-05-16 20:36:37.126741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.277 [2024-05-16 20:36:37.126748] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.277 [2024-05-16 20:36:37.126753] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.277 [2024-05-16 20:36:37.137128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-05-16 20:36:37.146788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.277 [2024-05-16 20:36:37.146824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.277 [2024-05-16 20:36:37.146838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.278 [2024-05-16 20:36:37.146845] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.278 [2024-05-16 20:36:37.146850] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.278 [2024-05-16 20:36:37.157220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.278 qpair failed and we were unable to recover it. 
00:27:24.278 [2024-05-16 20:36:37.166798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.278 [2024-05-16 20:36:37.166835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.278 [2024-05-16 20:36:37.166849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.278 [2024-05-16 20:36:37.166855] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.278 [2024-05-16 20:36:37.166861] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.278 [2024-05-16 20:36:37.177324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.278 qpair failed and we were unable to recover it. 00:27:24.278 [2024-05-16 20:36:37.186810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.278 [2024-05-16 20:36:37.186846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.278 [2024-05-16 20:36:37.186860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.278 [2024-05-16 20:36:37.186866] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.278 [2024-05-16 20:36:37.186872] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.278 [2024-05-16 20:36:37.197606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.278 qpair failed and we were unable to recover it. 00:27:24.278 [2024-05-16 20:36:37.206851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.278 [2024-05-16 20:36:37.206883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.278 [2024-05-16 20:36:37.206901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.278 [2024-05-16 20:36:37.206908] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.278 [2024-05-16 20:36:37.206914] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.278 [2024-05-16 20:36:37.217399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.278 qpair failed and we were unable to recover it. 
00:27:24.278 [2024-05-16 20:36:37.226900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.278 [2024-05-16 20:36:37.226937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.278 [2024-05-16 20:36:37.226951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.278 [2024-05-16 20:36:37.226957] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.278 [2024-05-16 20:36:37.226963] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.278 [2024-05-16 20:36:37.237428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.278 qpair failed and we were unable to recover it. 00:27:24.278 [2024-05-16 20:36:37.246965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.278 [2024-05-16 20:36:37.247006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.278 [2024-05-16 20:36:37.247020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.278 [2024-05-16 20:36:37.247027] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.278 [2024-05-16 20:36:37.247032] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.278 [2024-05-16 20:36:37.257366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.278 qpair failed and we were unable to recover it. 00:27:24.278 [2024-05-16 20:36:37.267004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.278 [2024-05-16 20:36:37.267039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.278 [2024-05-16 20:36:37.267074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.278 [2024-05-16 20:36:37.267082] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.278 [2024-05-16 20:36:37.267088] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.536 [2024-05-16 20:36:37.277605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.536 qpair failed and we were unable to recover it. 
00:27:24.536 [2024-05-16 20:36:37.287174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.536 [2024-05-16 20:36:37.287214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.536 [2024-05-16 20:36:37.287228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.536 [2024-05-16 20:36:37.287235] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.536 [2024-05-16 20:36:37.287244] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.536 [2024-05-16 20:36:37.297490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.536 qpair failed and we were unable to recover it. 00:27:24.536 [2024-05-16 20:36:37.307108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.536 [2024-05-16 20:36:37.307149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.307164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.307170] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.307176] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.317922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 00:27:24.537 [2024-05-16 20:36:37.327274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.327314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.327328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.327334] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.327339] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.337774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 
00:27:24.537 [2024-05-16 20:36:37.347230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.347271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.347285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.347292] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.347298] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.357883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 00:27:24.537 [2024-05-16 20:36:37.367493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.367529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.367543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.367550] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.367555] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.377618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 00:27:24.537 [2024-05-16 20:36:37.387293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.387331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.387344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.387350] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.387356] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.398202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 
00:27:24.537 [2024-05-16 20:36:37.407556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.407594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.407609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.407616] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.407621] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.417910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 00:27:24.537 [2024-05-16 20:36:37.427818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.427857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.427870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.427876] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.427882] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.438065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 00:27:24.537 [2024-05-16 20:36:37.447772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.447805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.447820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.447826] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.447832] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.458200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 
00:27:24.537 [2024-05-16 20:36:37.467671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.467709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.467722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.467734] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.467740] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.477991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 00:27:24.537 [2024-05-16 20:36:37.487544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.487588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.487601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.487607] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.487613] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.498246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 00:27:24.537 [2024-05-16 20:36:37.507790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.507830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.507846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.507853] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.507858] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.537 [2024-05-16 20:36:37.518089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.537 qpair failed and we were unable to recover it. 
00:27:24.537 [2024-05-16 20:36:37.527784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.537 [2024-05-16 20:36:37.527815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.537 [2024-05-16 20:36:37.527833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.537 [2024-05-16 20:36:37.527840] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.537 [2024-05-16 20:36:37.527846] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.538261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 00:27:24.796 [2024-05-16 20:36:37.548101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.548137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.548152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.548159] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.548165] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.558381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 00:27:24.796 [2024-05-16 20:36:37.568005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.568048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.568062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.568069] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.568074] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.578385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 
00:27:24.796 [2024-05-16 20:36:37.588074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.588114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.588128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.588135] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.588140] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.598547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 00:27:24.796 [2024-05-16 20:36:37.608221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.608256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.608271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.608278] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.608284] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.618529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 00:27:24.796 [2024-05-16 20:36:37.628169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.628206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.628219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.628226] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.628231] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.638784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 
00:27:24.796 [2024-05-16 20:36:37.648261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.648302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.648320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.648326] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.648332] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.658732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 00:27:24.796 [2024-05-16 20:36:37.668367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.668401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.668415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.668426] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.668432] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.678554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 00:27:24.796 [2024-05-16 20:36:37.688311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.688344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.688358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.688364] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.688370] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.698664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 
00:27:24.796 [2024-05-16 20:36:37.708333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.708369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.708383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.708390] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.708395] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.718693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 00:27:24.796 [2024-05-16 20:36:37.728413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.728459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.728473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.728479] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.728488] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.738929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.796 qpair failed and we were unable to recover it. 00:27:24.796 [2024-05-16 20:36:37.748540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.796 [2024-05-16 20:36:37.748579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.796 [2024-05-16 20:36:37.748593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.796 [2024-05-16 20:36:37.748599] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.796 [2024-05-16 20:36:37.748605] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.796 [2024-05-16 20:36:37.758789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.797 qpair failed and we were unable to recover it. 
00:27:24.797 [2024-05-16 20:36:37.768558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.797 [2024-05-16 20:36:37.768598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.797 [2024-05-16 20:36:37.768612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.797 [2024-05-16 20:36:37.768619] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.797 [2024-05-16 20:36:37.768625] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:24.797 [2024-05-16 20:36:37.778835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.797 qpair failed and we were unable to recover it. 00:27:25.055 [2024-05-16 20:36:37.788565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.788602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.055 [2024-05-16 20:36:37.788620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.055 [2024-05-16 20:36:37.788627] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.055 [2024-05-16 20:36:37.788633] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.055 [2024-05-16 20:36:37.798886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.055 qpair failed and we were unable to recover it. 00:27:25.055 [2024-05-16 20:36:37.808614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.808649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.055 [2024-05-16 20:36:37.808664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.055 [2024-05-16 20:36:37.808671] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.055 [2024-05-16 20:36:37.808677] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.055 [2024-05-16 20:36:37.818960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.055 qpair failed and we were unable to recover it. 
00:27:25.055 [2024-05-16 20:36:37.828704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.828740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.055 [2024-05-16 20:36:37.828754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.055 [2024-05-16 20:36:37.828760] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.055 [2024-05-16 20:36:37.828766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.055 [2024-05-16 20:36:37.839010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.055 qpair failed and we were unable to recover it. 00:27:25.055 [2024-05-16 20:36:37.848706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.848742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.055 [2024-05-16 20:36:37.848757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.055 [2024-05-16 20:36:37.848764] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.055 [2024-05-16 20:36:37.848769] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.055 [2024-05-16 20:36:37.859053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.055 qpair failed and we were unable to recover it. 00:27:25.055 [2024-05-16 20:36:37.868802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.868839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.055 [2024-05-16 20:36:37.868852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.055 [2024-05-16 20:36:37.868859] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.055 [2024-05-16 20:36:37.868865] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.055 [2024-05-16 20:36:37.879209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.055 qpair failed and we were unable to recover it. 
00:27:25.055 [2024-05-16 20:36:37.888860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.888895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.055 [2024-05-16 20:36:37.888909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.055 [2024-05-16 20:36:37.888915] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.055 [2024-05-16 20:36:37.888921] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.055 [2024-05-16 20:36:37.899237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.055 qpair failed and we were unable to recover it. 00:27:25.055 [2024-05-16 20:36:37.908939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.908977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.055 [2024-05-16 20:36:37.908991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.055 [2024-05-16 20:36:37.909000] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.055 [2024-05-16 20:36:37.909006] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.055 [2024-05-16 20:36:37.919292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.055 qpair failed and we were unable to recover it. 00:27:25.055 [2024-05-16 20:36:37.929012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.929050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.055 [2024-05-16 20:36:37.929063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.055 [2024-05-16 20:36:37.929070] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.055 [2024-05-16 20:36:37.929076] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.055 [2024-05-16 20:36:37.939405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.055 qpair failed and we were unable to recover it. 
00:27:25.055 [2024-05-16 20:36:37.949097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.055 [2024-05-16 20:36:37.949136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.056 [2024-05-16 20:36:37.949150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.056 [2024-05-16 20:36:37.949156] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.056 [2024-05-16 20:36:37.949162] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.056 [2024-05-16 20:36:37.959433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.056 qpair failed and we were unable to recover it. 00:27:25.056 [2024-05-16 20:36:37.969025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.056 [2024-05-16 20:36:37.969062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.056 [2024-05-16 20:36:37.969076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.056 [2024-05-16 20:36:37.969082] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.056 [2024-05-16 20:36:37.969088] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.056 [2024-05-16 20:36:37.979484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.056 qpair failed and we were unable to recover it. 00:27:25.056 [2024-05-16 20:36:37.989174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.056 [2024-05-16 20:36:37.989209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.056 [2024-05-16 20:36:37.989223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.056 [2024-05-16 20:36:37.989229] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.056 [2024-05-16 20:36:37.989235] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.056 [2024-05-16 20:36:37.999536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.056 qpair failed and we were unable to recover it. 
00:27:25.056 [2024-05-16 20:36:38.009098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.056 [2024-05-16 20:36:38.009134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.056 [2024-05-16 20:36:38.009150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.056 [2024-05-16 20:36:38.009157] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.056 [2024-05-16 20:36:38.009162] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.056 [2024-05-16 20:36:38.019540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.056 qpair failed and we were unable to recover it. 00:27:25.056 [2024-05-16 20:36:38.029261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.056 [2024-05-16 20:36:38.029301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.056 [2024-05-16 20:36:38.029315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.056 [2024-05-16 20:36:38.029322] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.056 [2024-05-16 20:36:38.029327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.056 [2024-05-16 20:36:38.039612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.056 qpair failed and we were unable to recover it. 00:27:25.314 [2024-05-16 20:36:38.049312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.314 [2024-05-16 20:36:38.049349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.314 [2024-05-16 20:36:38.049368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.314 [2024-05-16 20:36:38.049375] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.314 [2024-05-16 20:36:38.049381] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.314 [2024-05-16 20:36:38.059701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.314 qpair failed and we were unable to recover it. 
00:27:25.314 [2024-05-16 20:36:38.069309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.314 [2024-05-16 20:36:38.069347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.314 [2024-05-16 20:36:38.069363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.314 [2024-05-16 20:36:38.069369] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.314 [2024-05-16 20:36:38.069375] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.314 [2024-05-16 20:36:38.079818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.314 qpair failed and we were unable to recover it. 00:27:25.314 [2024-05-16 20:36:38.089453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.314 [2024-05-16 20:36:38.089484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.314 [2024-05-16 20:36:38.089501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.314 [2024-05-16 20:36:38.089507] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.314 [2024-05-16 20:36:38.089513] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.314 [2024-05-16 20:36:38.099778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.314 qpair failed and we were unable to recover it. 00:27:25.314 [2024-05-16 20:36:38.109500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.314 [2024-05-16 20:36:38.109539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.314 [2024-05-16 20:36:38.109554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.314 [2024-05-16 20:36:38.109560] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.314 [2024-05-16 20:36:38.109565] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.314 [2024-05-16 20:36:38.119750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.314 qpair failed and we were unable to recover it. 
00:27:25.314 [2024-05-16 20:36:38.129626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.314 [2024-05-16 20:36:38.129662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.314 [2024-05-16 20:36:38.129676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.129682] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.129688] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.139871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 00:27:25.315 [2024-05-16 20:36:38.149625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.315 [2024-05-16 20:36:38.149664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.315 [2024-05-16 20:36:38.149680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.149686] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.149692] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.159962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 00:27:25.315 [2024-05-16 20:36:38.169643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.315 [2024-05-16 20:36:38.169682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.315 [2024-05-16 20:36:38.169697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.169703] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.169712] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.180201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 
00:27:25.315 [2024-05-16 20:36:38.189662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.315 [2024-05-16 20:36:38.189700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.315 [2024-05-16 20:36:38.189714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.189720] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.189726] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.200120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 00:27:25.315 [2024-05-16 20:36:38.209756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.315 [2024-05-16 20:36:38.209793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.315 [2024-05-16 20:36:38.209807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.209814] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.209819] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.220412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 00:27:25.315 [2024-05-16 20:36:38.229771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.315 [2024-05-16 20:36:38.229809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.315 [2024-05-16 20:36:38.229822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.229829] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.229834] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.240097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 
00:27:25.315 [2024-05-16 20:36:38.249834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.315 [2024-05-16 20:36:38.249875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.315 [2024-05-16 20:36:38.249889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.249895] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.249901] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.260286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 00:27:25.315 [2024-05-16 20:36:38.269974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.315 [2024-05-16 20:36:38.270010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.315 [2024-05-16 20:36:38.270025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.270032] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.270037] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.280222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 00:27:25.315 [2024-05-16 20:36:38.290024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.315 [2024-05-16 20:36:38.290059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.315 [2024-05-16 20:36:38.290073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.315 [2024-05-16 20:36:38.290080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.315 [2024-05-16 20:36:38.290086] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.315 [2024-05-16 20:36:38.300358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.315 qpair failed and we were unable to recover it. 
00:27:25.574 [2024-05-16 20:36:38.310157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.310195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.310214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.310221] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.310227] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.574 [2024-05-16 20:36:38.320527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.574 qpair failed and we were unable to recover it. 00:27:25.574 [2024-05-16 20:36:38.330131] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.330166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.330181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.330188] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.330194] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.574 [2024-05-16 20:36:38.340494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.574 qpair failed and we were unable to recover it. 00:27:25.574 [2024-05-16 20:36:38.350115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.350150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.350167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.350174] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.350179] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.574 [2024-05-16 20:36:38.360432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.574 qpair failed and we were unable to recover it. 
00:27:25.574 [2024-05-16 20:36:38.370236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.370280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.370294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.370300] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.370306] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.574 [2024-05-16 20:36:38.380529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.574 qpair failed and we were unable to recover it. 00:27:25.574 [2024-05-16 20:36:38.390320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.390353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.390366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.390372] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.390378] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.574 [2024-05-16 20:36:38.400761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.574 qpair failed and we were unable to recover it. 00:27:25.574 [2024-05-16 20:36:38.410322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.410360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.410375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.410381] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.410387] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.574 [2024-05-16 20:36:38.420762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.574 qpair failed and we were unable to recover it. 
00:27:25.574 [2024-05-16 20:36:38.430463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.430501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.430516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.430522] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.430528] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.574 [2024-05-16 20:36:38.440848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.574 qpair failed and we were unable to recover it. 00:27:25.574 [2024-05-16 20:36:38.450562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.450604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.450618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.450624] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.450630] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.574 [2024-05-16 20:36:38.460909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.574 qpair failed and we were unable to recover it. 00:27:25.574 [2024-05-16 20:36:38.470516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.574 [2024-05-16 20:36:38.470553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.574 [2024-05-16 20:36:38.470569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.574 [2024-05-16 20:36:38.470575] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.574 [2024-05-16 20:36:38.470581] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.575 [2024-05-16 20:36:38.481049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.575 qpair failed and we were unable to recover it. 
00:27:25.575 [2024-05-16 20:36:38.490607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.575 [2024-05-16 20:36:38.490644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.575 [2024-05-16 20:36:38.490657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.575 [2024-05-16 20:36:38.490664] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.575 [2024-05-16 20:36:38.490669] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.575 [2024-05-16 20:36:38.501034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.575 qpair failed and we were unable to recover it. 00:27:25.575 [2024-05-16 20:36:38.510697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.575 [2024-05-16 20:36:38.510734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.575 [2024-05-16 20:36:38.510753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.575 [2024-05-16 20:36:38.510760] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.575 [2024-05-16 20:36:38.510766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.575 [2024-05-16 20:36:38.520948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.575 qpair failed and we were unable to recover it. 00:27:25.575 [2024-05-16 20:36:38.530694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.575 [2024-05-16 20:36:38.530728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.575 [2024-05-16 20:36:38.530746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.575 [2024-05-16 20:36:38.530752] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.575 [2024-05-16 20:36:38.530758] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.575 [2024-05-16 20:36:38.541253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.575 qpair failed and we were unable to recover it. 
00:27:25.575 [2024-05-16 20:36:38.550810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.575 [2024-05-16 20:36:38.550844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.575 [2024-05-16 20:36:38.550858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.575 [2024-05-16 20:36:38.550865] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.575 [2024-05-16 20:36:38.550871] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.575 [2024-05-16 20:36:38.561196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.575 qpair failed and we were unable to recover it. 00:27:25.833 [2024-05-16 20:36:38.570856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.833 [2024-05-16 20:36:38.570894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.833 [2024-05-16 20:36:38.570914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.833 [2024-05-16 20:36:38.570920] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.833 [2024-05-16 20:36:38.570926] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.833 [2024-05-16 20:36:38.581472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.833 qpair failed and we were unable to recover it. 00:27:25.833 [2024-05-16 20:36:38.590881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.833 [2024-05-16 20:36:38.590920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.833 [2024-05-16 20:36:38.590934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.833 [2024-05-16 20:36:38.590940] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.833 [2024-05-16 20:36:38.590946] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.833 [2024-05-16 20:36:38.601234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.833 qpair failed and we were unable to recover it. 
00:27:25.833 [2024-05-16 20:36:38.611007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.833 [2024-05-16 20:36:38.611041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.833 [2024-05-16 20:36:38.611056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.833 [2024-05-16 20:36:38.611063] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.833 [2024-05-16 20:36:38.611079] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.833 [2024-05-16 20:36:38.621319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.833 qpair failed and we were unable to recover it. 00:27:25.833 [2024-05-16 20:36:38.631057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.833 [2024-05-16 20:36:38.631095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.833 [2024-05-16 20:36:38.631109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.833 [2024-05-16 20:36:38.631115] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.833 [2024-05-16 20:36:38.631121] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.833 [2024-05-16 20:36:38.641442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.833 qpair failed and we were unable to recover it. 00:27:25.833 [2024-05-16 20:36:38.651107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.833 [2024-05-16 20:36:38.651139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.833 [2024-05-16 20:36:38.651153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.833 [2024-05-16 20:36:38.651159] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.833 [2024-05-16 20:36:38.651165] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.661543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 
00:27:25.834 [2024-05-16 20:36:38.671197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.834 [2024-05-16 20:36:38.671235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.834 [2024-05-16 20:36:38.671249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.834 [2024-05-16 20:36:38.671255] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.834 [2024-05-16 20:36:38.671261] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.681690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 00:27:25.834 [2024-05-16 20:36:38.691314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.834 [2024-05-16 20:36:38.691355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.834 [2024-05-16 20:36:38.691368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.834 [2024-05-16 20:36:38.691375] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.834 [2024-05-16 20:36:38.691380] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.701878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 00:27:25.834 [2024-05-16 20:36:38.711230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.834 [2024-05-16 20:36:38.711273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.834 [2024-05-16 20:36:38.711288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.834 [2024-05-16 20:36:38.711294] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.834 [2024-05-16 20:36:38.711300] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.721696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 
00:27:25.834 [2024-05-16 20:36:38.731378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.834 [2024-05-16 20:36:38.731413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.834 [2024-05-16 20:36:38.731435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.834 [2024-05-16 20:36:38.731443] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.834 [2024-05-16 20:36:38.731449] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.741759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 00:27:25.834 [2024-05-16 20:36:38.751461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.834 [2024-05-16 20:36:38.751505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.834 [2024-05-16 20:36:38.751519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.834 [2024-05-16 20:36:38.751526] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.834 [2024-05-16 20:36:38.751532] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.762092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 00:27:25.834 [2024-05-16 20:36:38.771470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.834 [2024-05-16 20:36:38.771510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.834 [2024-05-16 20:36:38.771526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.834 [2024-05-16 20:36:38.771533] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.834 [2024-05-16 20:36:38.771538] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.782013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 
00:27:25.834 [2024-05-16 20:36:38.791604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.834 [2024-05-16 20:36:38.791643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.834 [2024-05-16 20:36:38.791663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.834 [2024-05-16 20:36:38.791670] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.834 [2024-05-16 20:36:38.791676] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.802100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 00:27:25.834 [2024-05-16 20:36:38.811631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.834 [2024-05-16 20:36:38.811660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.834 [2024-05-16 20:36:38.811676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.834 [2024-05-16 20:36:38.811682] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.834 [2024-05-16 20:36:38.811688] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:25.834 [2024-05-16 20:36:38.822073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.834 qpair failed and we were unable to recover it. 00:27:26.092 [2024-05-16 20:36:38.831794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.092 [2024-05-16 20:36:38.831831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.092 [2024-05-16 20:36:38.831866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.092 [2024-05-16 20:36:38.831874] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.831880] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:38.842189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 
00:27:26.093 [2024-05-16 20:36:38.851720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:38.851764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:38.851778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:38.851785] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.851790] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:38.862417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-05-16 20:36:38.871821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:38.871853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:38.871867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:38.871873] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.871879] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:38.882187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-05-16 20:36:38.891851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:38.891890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:38.891904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:38.891911] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.891916] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:38.902271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 
00:27:26.093 [2024-05-16 20:36:38.911946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:38.911981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:38.911996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:38.912002] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.912008] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:38.922406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-05-16 20:36:38.931943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:38.931983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:38.931999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:38.932005] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.932011] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:38.942769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-05-16 20:36:38.952071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:38.952110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:38.952124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:38.952131] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.952136] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:38.962402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 
00:27:26.093 [2024-05-16 20:36:38.972114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:38.972147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:38.972165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:38.972171] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.972177] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:38.982607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-05-16 20:36:38.992232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:38.992271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:38.992286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:38.992292] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:38.992298] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:39.002565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-05-16 20:36:39.012239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:39.012278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:39.012297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:39.012305] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:39.012311] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:39.022564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 
00:27:26.093 [2024-05-16 20:36:39.032215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:39.032255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:39.032269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:39.032276] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:39.032282] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:39.042553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-05-16 20:36:39.052440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:39.052476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:39.052490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:39.052497] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:39.052506] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:39.062947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-05-16 20:36:39.072417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.093 [2024-05-16 20:36:39.072461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.093 [2024-05-16 20:36:39.072475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.093 [2024-05-16 20:36:39.072481] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.093 [2024-05-16 20:36:39.072486] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.093 [2024-05-16 20:36:39.083054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 qpair failed and we were unable to recover it. 
00:27:26.352 [2024-05-16 20:36:39.092582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.092626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.092644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.092651] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.092657] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.102871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.352 [2024-05-16 20:36:39.112517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.112552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.112568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.112574] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.112579] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.122995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.352 [2024-05-16 20:36:39.132679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.132718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.132733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.132739] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.132745] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.143199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 
00:27:26.352 [2024-05-16 20:36:39.152769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.152806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.152822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.152829] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.152835] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.163139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.352 [2024-05-16 20:36:39.172813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.172854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.172868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.172875] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.172881] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.183343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.352 [2024-05-16 20:36:39.192962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.192999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.193012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.193019] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.193024] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.203226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 
00:27:26.352 [2024-05-16 20:36:39.213036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.213067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.213083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.213089] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.213095] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.223378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.352 [2024-05-16 20:36:39.233016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.233053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.233070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.233077] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.233082] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.243339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.352 [2024-05-16 20:36:39.253052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.253095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.253111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.253117] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.253123] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.263475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 
00:27:26.352 [2024-05-16 20:36:39.273117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.273155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.273170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.273177] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.273182] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.283538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.352 [2024-05-16 20:36:39.293102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.293142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.293156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.293163] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.293168] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.303588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.352 [2024-05-16 20:36:39.313175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.313216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.313231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.313237] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.313242] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.323613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 
00:27:26.352 [2024-05-16 20:36:39.333208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.352 [2024-05-16 20:36:39.333247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.352 [2024-05-16 20:36:39.333262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.352 [2024-05-16 20:36:39.333268] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.352 [2024-05-16 20:36:39.333274] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.352 [2024-05-16 20:36:39.343656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.641 [2024-05-16 20:36:39.353325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.641 [2024-05-16 20:36:39.353364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.641 [2024-05-16 20:36:39.353382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.641 [2024-05-16 20:36:39.353388] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.641 [2024-05-16 20:36:39.353394] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.641 [2024-05-16 20:36:39.363677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-05-16 20:36:39.373388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.641 [2024-05-16 20:36:39.373426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.641 [2024-05-16 20:36:39.373441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.641 [2024-05-16 20:36:39.373448] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.641 [2024-05-16 20:36:39.373453] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.641 [2024-05-16 20:36:39.383871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.641 qpair failed and we were unable to recover it. 
00:27:26.641 [2024-05-16 20:36:39.393466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.641 [2024-05-16 20:36:39.393504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.641 [2024-05-16 20:36:39.393518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.641 [2024-05-16 20:36:39.393525] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.641 [2024-05-16 20:36:39.393530] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.641 [2024-05-16 20:36:39.403814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-05-16 20:36:39.413436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.641 [2024-05-16 20:36:39.413477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.641 [2024-05-16 20:36:39.413495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.641 [2024-05-16 20:36:39.413502] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.413507] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.423827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-05-16 20:36:39.433513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.433554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.433568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.433574] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.433580] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.443905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 
00:27:26.642 [2024-05-16 20:36:39.453540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.453574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.453589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.453596] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.453601] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.463987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-05-16 20:36:39.473705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.473742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.473756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.473762] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.473768] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.484168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-05-16 20:36:39.493803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.493848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.493863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.493870] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.493879] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.504033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 
00:27:26.642 [2024-05-16 20:36:39.513866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.513909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.513926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.513933] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.513939] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.524245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-05-16 20:36:39.533787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.533823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.533838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.533845] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.533851] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.544379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-05-16 20:36:39.553909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.553949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.553964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.553970] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.553976] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.564249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 
00:27:26.642 [2024-05-16 20:36:39.573883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.573921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.573936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.573943] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.573948] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.584440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-05-16 20:36:39.594045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.594081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.594096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.594102] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.594108] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.604409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-05-16 20:36:39.614067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.642 [2024-05-16 20:36:39.614105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.642 [2024-05-16 20:36:39.614121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.642 [2024-05-16 20:36:39.614128] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.642 [2024-05-16 20:36:39.614134] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.642 [2024-05-16 20:36:39.624257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.642 qpair failed and we were unable to recover it. 
00:27:26.934 [2024-05-16 20:36:39.634120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.934 [2024-05-16 20:36:39.634159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.934 [2024-05-16 20:36:39.634194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.934 [2024-05-16 20:36:39.634202] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.934 [2024-05-16 20:36:39.634208] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.934 [2024-05-16 20:36:39.644499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.934 qpair failed and we were unable to recover it. 00:27:26.934 [2024-05-16 20:36:39.654174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.934 [2024-05-16 20:36:39.654214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.934 [2024-05-16 20:36:39.654229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.934 [2024-05-16 20:36:39.654236] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.934 [2024-05-16 20:36:39.654242] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.934 [2024-05-16 20:36:39.664626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.934 qpair failed and we were unable to recover it. 00:27:26.934 [2024-05-16 20:36:39.674191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.934 [2024-05-16 20:36:39.674230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.934 [2024-05-16 20:36:39.674247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.934 [2024-05-16 20:36:39.674254] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.934 [2024-05-16 20:36:39.674259] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.684781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 
00:27:26.935 [2024-05-16 20:36:39.694276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.694314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.694329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.694335] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.694341] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.704870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 00:27:26.935 [2024-05-16 20:36:39.714303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.714340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.714355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.714361] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.714367] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.724757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 00:27:26.935 [2024-05-16 20:36:39.734467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.734508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.734523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.734529] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.734535] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.744926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 
00:27:26.935 [2024-05-16 20:36:39.754430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.754463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.754477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.754484] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.754490] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.765024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 00:27:26.935 [2024-05-16 20:36:39.774522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.774557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.774573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.774580] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.774586] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.785042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 00:27:26.935 [2024-05-16 20:36:39.794557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.794594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.794609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.794616] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.794621] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.804941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 
00:27:26.935 [2024-05-16 20:36:39.814618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.814658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.814673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.814680] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.814686] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.825070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 00:27:26.935 [2024-05-16 20:36:39.834750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.834787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.834802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.834808] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.834814] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.845071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 00:27:26.935 [2024-05-16 20:36:39.854791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.854829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.854848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.854854] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.854860] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.865104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 
00:27:26.935 [2024-05-16 20:36:39.874805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.874842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.874857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.874863] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.874869] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.885252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 00:27:26.935 [2024-05-16 20:36:39.894995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.895035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.895049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.895056] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.895062] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.905389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 00:27:26.935 [2024-05-16 20:36:39.914871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.935 [2024-05-16 20:36:39.914905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.935 [2024-05-16 20:36:39.914924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.935 [2024-05-16 20:36:39.914931] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.935 [2024-05-16 20:36:39.914937] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:26.935 [2024-05-16 20:36:39.925493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.935 qpair failed and we were unable to recover it. 
00:27:27.194 [2024-05-16 20:36:39.934911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.194 [2024-05-16 20:36:39.934946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.194 [2024-05-16 20:36:39.934965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.194 [2024-05-16 20:36:39.934972] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.194 [2024-05-16 20:36:39.934982] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.194 [2024-05-16 20:36:39.945574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.194 qpair failed and we were unable to recover it. 00:27:27.194 [2024-05-16 20:36:39.955087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.194 [2024-05-16 20:36:39.955123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.194 [2024-05-16 20:36:39.955138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.194 [2024-05-16 20:36:39.955145] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.194 [2024-05-16 20:36:39.955151] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.194 [2024-05-16 20:36:39.965495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.194 qpair failed and we were unable to recover it. 00:27:27.194 [2024-05-16 20:36:39.975122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.194 [2024-05-16 20:36:39.975165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.194 [2024-05-16 20:36:39.975179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.194 [2024-05-16 20:36:39.975186] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.194 [2024-05-16 20:36:39.975192] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.194 [2024-05-16 20:36:39.985627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.194 qpair failed and we were unable to recover it. 
00:27:27.194 [2024-05-16 20:36:39.995242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.194 [2024-05-16 20:36:39.995275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.194 [2024-05-16 20:36:39.995290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.194 [2024-05-16 20:36:39.995297] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:39.995303] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.005410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 00:27:27.195 [2024-05-16 20:36:40.015549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.015592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.015612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.015620] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.015627] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.025566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 00:27:27.195 [2024-05-16 20:36:40.035204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.035244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.035259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.035265] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.035271] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.046991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 
00:27:27.195 [2024-05-16 20:36:40.055454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.055495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.055512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.055520] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.055526] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.065793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 00:27:27.195 [2024-05-16 20:36:40.075418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.075460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.075476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.075483] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.075490] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.085782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 00:27:27.195 [2024-05-16 20:36:40.095519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.095574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.095589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.095609] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.095615] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.106008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 
00:27:27.195 [2024-05-16 20:36:40.115597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.115637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.115659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.115665] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.115671] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.126045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 00:27:27.195 [2024-05-16 20:36:40.135621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.135661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.135677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.135684] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.135690] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.146103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 00:27:27.195 [2024-05-16 20:36:40.155706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.155740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.155756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.155763] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.155769] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.166153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 
00:27:27.195 [2024-05-16 20:36:40.175759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.195 [2024-05-16 20:36:40.175795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.195 [2024-05-16 20:36:40.175811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.195 [2024-05-16 20:36:40.175817] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.195 [2024-05-16 20:36:40.175823] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.195 [2024-05-16 20:36:40.186218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.195 qpair failed and we were unable to recover it. 00:27:27.454 [2024-05-16 20:36:40.195859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.195897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.195915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.195923] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.195928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.206341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 00:27:27.454 [2024-05-16 20:36:40.215852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.215891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.215907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.215914] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.215920] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.226244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 
00:27:27.454 [2024-05-16 20:36:40.235961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.235995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.236010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.236017] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.236023] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.246387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 00:27:27.454 [2024-05-16 20:36:40.255990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.256026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.256041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.256047] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.256053] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.266471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 00:27:27.454 [2024-05-16 20:36:40.276033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.276074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.276089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.276095] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.276101] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.286618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 
00:27:27.454 [2024-05-16 20:36:40.296125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.296161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.296180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.296186] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.296192] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.306507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 00:27:27.454 [2024-05-16 20:36:40.316147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.316184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.316198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.316205] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.316210] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.326490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 00:27:27.454 [2024-05-16 20:36:40.336165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.336196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.336211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.336217] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.336223] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.346688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 
00:27:27.454 [2024-05-16 20:36:40.356293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.356330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.356345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.356351] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.356357] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.366684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 00:27:27.454 [2024-05-16 20:36:40.376319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.454 [2024-05-16 20:36:40.376363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.454 [2024-05-16 20:36:40.376377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.454 [2024-05-16 20:36:40.376384] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.454 [2024-05-16 20:36:40.376392] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.454 [2024-05-16 20:36:40.386763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.454 qpair failed and we were unable to recover it. 00:27:27.455 [2024-05-16 20:36:40.396444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.455 [2024-05-16 20:36:40.396479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.455 [2024-05-16 20:36:40.396494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.455 [2024-05-16 20:36:40.396500] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.455 [2024-05-16 20:36:40.396506] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.455 [2024-05-16 20:36:40.407129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.455 qpair failed and we were unable to recover it. 
00:27:27.455 [2024-05-16 20:36:40.416595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.455 [2024-05-16 20:36:40.416633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.455 [2024-05-16 20:36:40.416648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.455 [2024-05-16 20:36:40.416655] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.455 [2024-05-16 20:36:40.416661] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.455 [2024-05-16 20:36:40.426988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.455 qpair failed and we were unable to recover it. 00:27:27.455 [2024-05-16 20:36:40.436578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.455 [2024-05-16 20:36:40.436619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.455 [2024-05-16 20:36:40.436633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.455 [2024-05-16 20:36:40.436640] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.455 [2024-05-16 20:36:40.436645] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.713 [2024-05-16 20:36:40.447173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.713 qpair failed and we were unable to recover it. 00:27:27.713 [2024-05-16 20:36:40.456634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.713 [2024-05-16 20:36:40.456679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.713 [2024-05-16 20:36:40.456696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.713 [2024-05-16 20:36:40.456704] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.713 [2024-05-16 20:36:40.456709] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.713 [2024-05-16 20:36:40.467015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.713 qpair failed and we were unable to recover it. 
00:27:27.713 [2024-05-16 20:36:40.476641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.713 [2024-05-16 20:36:40.476676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.713 [2024-05-16 20:36:40.476692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.713 [2024-05-16 20:36:40.476698] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.713 [2024-05-16 20:36:40.476704] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.713 [2024-05-16 20:36:40.487198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.713 qpair failed and we were unable to recover it. 00:27:27.713 [2024-05-16 20:36:40.496783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.713 [2024-05-16 20:36:40.496817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.713 [2024-05-16 20:36:40.496832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.713 [2024-05-16 20:36:40.496839] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.713 [2024-05-16 20:36:40.496844] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.713 [2024-05-16 20:36:40.507212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.713 qpair failed and we were unable to recover it. 00:27:27.713 [2024-05-16 20:36:40.516816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.713 [2024-05-16 20:36:40.516854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.713 [2024-05-16 20:36:40.516871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.713 [2024-05-16 20:36:40.516878] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.713 [2024-05-16 20:36:40.516884] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:27.713 [2024-05-16 20:36:40.527216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.713 qpair failed and we were unable to recover it. 
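Note on the records above: they are one repeating pattern. The target refuses each new I/O queue pair with "Unknown controller ID 0x1", the host sees the Fabrics CONNECT complete with sct 1, sc 130 (0x82, which in the NVMe-oF Connect command set is an invalid-parameters rejection, consistent with the target-side message), and the queue pair is then dropped with CQ transport error -6. This is the behaviour the disconnect test is provoking on purpose. If the same pattern showed up outside this harness, a first host-side sanity check could be a plain discovery against the listener used throughout this log; the command below is an illustrative assumption (stock nvme-cli, not part of the harness output), with only the address and service ID taken from the log.

    # Hypothetical check, not from the log above: confirm the RDMA listener at
    # 192.168.100.8:4420 still answers discovery before suspecting the CONNECT path.
    nvme discover -t rdma -a 192.168.100.8 -s 4420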
00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Read completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 Write completed with error (sct=0, sc=8) 00:27:28.648 starting I/O failed 00:27:28.648 [2024-05-16 20:36:41.532268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:28.648 [2024-05-16 20:36:41.539588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.648 [2024-05-16 20:36:41.539627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.648 [2024-05-16 20:36:41.539643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.648 [2024-05-16 20:36:41.539651] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:27:28.648 [2024-05-16 20:36:41.539657] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:27:28.648 [2024-05-16 20:36:41.550224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:28.648 qpair failed and we were unable to recover it. 00:27:28.648 [2024-05-16 20:36:41.560088] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.648 [2024-05-16 20:36:41.560128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.648 [2024-05-16 20:36:41.560142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.648 [2024-05-16 20:36:41.560149] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.648 [2024-05-16 20:36:41.560155] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:27:28.648 [2024-05-16 20:36:41.570306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:28.648 qpair failed and we were unable to recover it. 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, 
sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Write completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.581 Read completed with error (sct=0, sc=8) 00:27:29.581 starting I/O failed 00:27:29.840 [2024-05-16 20:36:42.575346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.840 [2024-05-16 20:36:42.582786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.840 [2024-05-16 20:36:42.582825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.840 [2024-05-16 20:36:42.582841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.840 [2024-05-16 20:36:42.582848] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.840 [2024-05-16 20:36:42.582853] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:29.840 [2024-05-16 20:36:42.593498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.840 qpair failed and we were unable to recover it. 00:27:29.840 [2024-05-16 20:36:42.603119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.840 [2024-05-16 20:36:42.603161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.840 [2024-05-16 20:36:42.603177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.840 [2024-05-16 20:36:42.603184] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.840 [2024-05-16 20:36:42.603191] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:29.840 [2024-05-16 20:36:42.613437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.840 qpair failed and we were unable to recover it. 00:27:29.840 [2024-05-16 20:36:42.613553] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:29.840 A controller has encountered a failure and is being reset. 
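At this point the host stops retrying individual queue pairs: the Keep Alive submission fails and the controller is flagged for reset ("A controller has encountered a failure and is being reset"); the records that follow show the controller coming back and the worker threads being relaunched. When chasing this kind of failure by hand rather than through the harness, the target side can be inspected over SPDK's JSON-RPC socket. The sketch below is an assumed example using the stock rpc.py client at its default socket path; neither the command nor its output appears in this log.

    # Hypothetical target-side check: list the subsystems, listeners and
    # namespaces currently exposed by the running nvmf_tgt.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems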
00:27:29.840 [2024-05-16 20:36:42.623211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.840 [2024-05-16 20:36:42.623256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.840 [2024-05-16 20:36:42.623283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.840 [2024-05-16 20:36:42.623295] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.840 [2024-05-16 20:36:42.623305] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:29.840 [2024-05-16 20:36:42.633555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:29.840 qpair failed and we were unable to recover it. 00:27:29.840 [2024-05-16 20:36:42.643244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.840 [2024-05-16 20:36:42.643283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.840 [2024-05-16 20:36:42.643299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.840 [2024-05-16 20:36:42.643310] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.840 [2024-05-16 20:36:42.643317] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:29.840 [2024-05-16 20:36:42.653647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:29.840 qpair failed and we were unable to recover it. 00:27:29.840 [2024-05-16 20:36:42.653807] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:27:29.840 [2024-05-16 20:36:42.684801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:29.840 Controller properly reset. 00:27:29.840 Initializing NVMe Controllers 00:27:29.840 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.840 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.840 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:29.840 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:29.840 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:29.840 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:29.840 Initialization complete. Launching workers. 
00:27:29.840 Starting thread on core 1 00:27:29.840 Starting thread on core 2 00:27:29.840 Starting thread on core 3 00:27:29.840 Starting thread on core 0 00:27:29.840 20:36:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:29.840 00:27:29.840 real 0m13.527s 00:27:29.840 user 0m29.594s 00:27:29.840 sys 0m2.466s 00:27:29.840 20:36:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:29.840 20:36:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.840 ************************************ 00:27:29.840 END TEST nvmf_target_disconnect_tc2 00:27:29.840 ************************************ 00:27:29.840 20:36:42 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:27:29.840 20:36:42 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:27:29.840 20:36:42 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:29.840 20:36:42 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:29.840 20:36:42 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:29.840 ************************************ 00:27:29.841 START TEST nvmf_target_disconnect_tc3 00:27:29.841 ************************************ 00:27:29.841 20:36:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc3 00:27:29.841 20:36:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3214082 00:27:29.841 20:36:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:27:29.841 20:36:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:27:30.099 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.997 20:36:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3212671 00:27:31.997 20:36:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 
Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Read completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 Write completed with error (sct=0, sc=8) 00:27:33.371 starting I/O failed 00:27:33.371 [2024-05-16 20:36:45.980009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3212671 Killed "${NVMF_APP[@]}" "$@" 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3214625 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3214625 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3214625 ']' 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 
00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:33.938 20:36:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.938 [2024-05-16 20:36:46.865453] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:27:33.938 [2024-05-16 20:36:46.865502] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.938 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.196 [2024-05-16 20:36:46.939071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.196 Write completed with error (sct=0, sc=8) 00:27:34.196 starting I/O failed 00:27:34.196 Read completed with error (sct=0, sc=8) 00:27:34.196 starting I/O failed 00:27:34.196 Write completed with error (sct=0, sc=8) 00:27:34.196 starting I/O failed 00:27:34.196 Write completed with error (sct=0, sc=8) 00:27:34.196 starting I/O failed 00:27:34.196 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Write completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Write completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Write completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Write completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Write completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Write completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 
00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Read completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 Write completed with error (sct=0, sc=8) 00:27:34.197 starting I/O failed 00:27:34.197 [2024-05-16 20:36:46.985111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:34.197 [2024-05-16 20:36:47.011235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.197 [2024-05-16 20:36:47.011268] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.197 [2024-05-16 20:36:47.011275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.197 [2024-05-16 20:36:47.011281] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.197 [2024-05-16 20:36:47.011286] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.197 [2024-05-16 20:36:47.011372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:27:34.197 [2024-05-16 20:36:47.011501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:27:34.197 [2024-05-16 20:36:47.011607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:34.197 [2024-05-16 20:36:47.011608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:34.763 Malloc0 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:34.763 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.763 20:36:47 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.021 [2024-05-16 20:36:47.757200] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2505a10/0x25116e0) succeed. 00:27:35.021 [2024-05-16 20:36:47.767685] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2507050/0x25b1790) succeed. 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.021 [2024-05-16 20:36:47.906797] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:35.021 [2024-05-16 20:36:47.907213] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.021 20:36:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3214082 00:27:35.021 Read completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Read completed with error (sct=0, sc=8) 00:27:35.021 starting I/O 
failed 00:27:35.021 Read completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Read completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Write completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Write completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Write completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Write completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Read completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Write completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Write completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Read completed with error (sct=0, sc=8) 00:27:35.021 starting I/O failed 00:27:35.021 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Read completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 Write completed with error (sct=0, sc=8) 00:27:35.022 starting I/O failed 00:27:35.022 [2024-05-16 20:36:47.990134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.022 [2024-05-16 20:36:47.991710] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:35.022 [2024-05-16 20:36:47.991728] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:35.022 [2024-05-16 20:36:47.991734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:36.396 [2024-05-16 20:36:48.995721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.396 qpair failed and we were unable to recover it. 
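The tc3 trace above reads more easily as a plain command sequence: the original nvmf target (pid 3212671) is killed, a fresh nvmf_tgt is started with -m 0xF0, and the subsystem is rebuilt with its listeners on the failover address 192.168.100.9 only, so the reconnect example's CONNECT retries keep failing until it resorts to the alternate address further down. A minimal sketch of that target-side setup, using the rpc_cmd calls exactly as they appear in the trace (rpc_cmd is assumed here to be the autotest wrapper around the SPDK RPC client):

  # Target-side setup mirrored from the shell trace above (a sketch, not the test script itself)
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                      # backing bdev for the namespace
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024   # RDMA transport
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Listeners go on 192.168.100.9 only; connects to the original 192.168.100.8 address
  # keep failing until the initiator fails over to the alt_traddr given to the reconnect app.
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420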
00:27:36.396 [2024-05-16 20:36:48.997252] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:36.396 [2024-05-16 20:36:48.997267] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:36.396 [2024-05-16 20:36:48.997273] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:37.330 [2024-05-16 20:36:50.001257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.330 qpair failed and we were unable to recover it. 00:27:37.330 [2024-05-16 20:36:50.002760] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:37.330 [2024-05-16 20:36:50.002775] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:37.330 [2024-05-16 20:36:50.002781] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:38.264 [2024-05-16 20:36:51.006468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.264 qpair failed and we were unable to recover it. 00:27:38.264 [2024-05-16 20:36:51.007785] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:38.264 [2024-05-16 20:36:51.007801] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:38.264 [2024-05-16 20:36:51.007810] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:39.200 [2024-05-16 20:36:52.011773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.200 qpair failed and we were unable to recover it. 00:27:39.200 [2024-05-16 20:36:52.013306] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:39.200 [2024-05-16 20:36:52.013321] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:39.200 [2024-05-16 20:36:52.013327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:40.133 [2024-05-16 20:36:53.017114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.133 qpair failed and we were unable to recover it. 00:27:40.133 [2024-05-16 20:36:53.018605] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:40.133 [2024-05-16 20:36:53.018620] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:40.133 [2024-05-16 20:36:53.018626] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:41.064 [2024-05-16 20:36:54.022599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.064 qpair failed and we were unable to recover it. 
00:27:41.065 [2024-05-16 20:36:54.023953] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:41.065 [2024-05-16 20:36:54.023969] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:41.065 [2024-05-16 20:36:54.023975] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:42.437 [2024-05-16 20:36:55.027928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.437 qpair failed and we were unable to recover it. 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Read completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Read completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Read completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Read completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Read completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Read completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Read completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Read completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.372 Write completed with error (sct=0, sc=8) 00:27:43.372 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Read completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Read completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Read completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Write completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Read completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 Read completed with error (sct=0, sc=8) 00:27:43.373 starting I/O failed 00:27:43.373 [2024-05-16 20:36:56.032989] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:43.373 [2024-05-16 20:36:56.033013] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:43.373 A controller has encountered a failure and is being reset. 00:27:43.373 Resorting to new failover address 192.168.100.9 00:27:43.373 [2024-05-16 20:36:56.033096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:43.373 [2024-05-16 20:36:56.033149] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:27:43.373 [2024-05-16 20:36:56.064573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:43.373 Controller properly reset. 00:27:43.373 Initializing NVMe Controllers 00:27:43.373 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.373 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:43.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:43.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:43.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:43.373 Initialization complete. Launching workers. 00:27:43.373 Starting thread on core 1 00:27:43.373 Starting thread on core 2 00:27:43.373 Starting thread on core 3 00:27:43.373 Starting thread on core 0 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:27:43.373 00:27:43.373 real 0m13.342s 00:27:43.373 user 0m58.113s 00:27:43.373 sys 0m3.041s 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:43.373 ************************************ 00:27:43.373 END TEST nvmf_target_disconnect_tc3 00:27:43.373 ************************************ 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:43.373 rmmod nvme_rdma 00:27:43.373 rmmod nvme_fabrics 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:43.373 20:36:56 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3214625 ']' 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3214625 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3214625 ']' 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3214625 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3214625 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3214625' 00:27:43.373 killing process with pid 3214625 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3214625 00:27:43.373 [2024-05-16 20:36:56.282065] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:43.373 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3214625 00:27:43.373 [2024-05-16 20:36:56.364610] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:27:43.632 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:43.632 20:36:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:43.632 00:27:43.632 real 0m34.793s 00:27:43.632 user 2m8.172s 00:27:43.632 sys 0m10.691s 00:27:43.632 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:43.632 20:36:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:43.632 ************************************ 00:27:43.632 END TEST nvmf_target_disconnect 00:27:43.632 ************************************ 00:27:43.632 20:36:56 nvmf_rdma -- nvmf/nvmf.sh@125 -- # timing_exit host 00:27:43.632 20:36:56 nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.632 20:36:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:43.891 20:36:56 nvmf_rdma -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:43.891 00:27:43.891 real 20m55.413s 00:27:43.891 user 52m49.211s 00:27:43.891 sys 4m46.055s 00:27:43.891 20:36:56 nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:43.891 20:36:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:43.891 ************************************ 00:27:43.891 END TEST nvmf_rdma 00:27:43.891 ************************************ 00:27:43.891 20:36:56 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:43.891 20:36:56 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:43.891 20:36:56 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:27:43.891 20:36:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.891 ************************************ 00:27:43.891 START TEST spdkcli_nvmf_rdma 00:27:43.891 ************************************ 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:43.891 * Looking for test storage... 00:27:43.891 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:43.891 20:36:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3216330 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3216330 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@827 -- # '[' -z 3216330 ']' 00:27:43.892 
20:36:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:43.892 20:36:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:43.892 [2024-05-16 20:36:56.878453] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:27:43.892 [2024-05-16 20:36:56.878503] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216330 ] 00:27:44.150 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.150 [2024-05-16 20:36:56.938412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:44.150 [2024-05-16 20:36:57.018273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.150 [2024-05-16 20:36:57.018276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.715 20:36:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:44.715 20:36:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # return 0 00:27:44.715 20:36:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:44.715 20:36:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.715 20:36:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.974 20:36:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # 
pci_devs=() 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:27:51.532 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:51.532 
20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:27:51.532 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:27:51.532 Found net devices under 0000:da:00.0: mlx_0_0 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:27:51.532 Found net devices under 0000:da:00.1: mlx_0_1 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:51.532 20:37:03 spdkcli_nvmf_rdma 
-- nvmf/common.sh@63 -- # modprobe ib_core 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:51.532 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:51.533 262: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:51.533 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:27:51.533 altname enp218s0f0np0 00:27:51.533 altname ens818f0np0 00:27:51.533 inet 192.168.100.8/24 scope global mlx_0_0 00:27:51.533 valid_lft forever preferred_lft forever 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:51.533 263: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:51.533 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:27:51.533 altname enp218s0f1np1 00:27:51.533 altname ens818f1np1 00:27:51.533 inet 192.168.100.9/24 scope global mlx_0_1 00:27:51.533 valid_lft forever preferred_lft forever 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:51.533 192.168.100.9' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:51.533 192.168.100.9' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:51.533 192.168.100.9' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:51.533 20:37:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:51.533 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:51.533 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:51.533 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:51.533 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:51.533 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:51.533 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:51.533 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' 
'\''192.168.100.8:4260'\'' True 00:27:51.533 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:27:51.533 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:51.533 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:51.533 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:51.533 ' 00:27:53.434 [2024-05-16 20:37:06.227827] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x142c050/0x157c380) succeed. 00:27:53.434 [2024-05-16 20:37:06.239792] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x142d730/0x143c200) succeed. 
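[Editor's note] The spdkcli_job.py call above batches the spdkcli commands that build the NVMe-oF/RDMA configuration under test (malloc bdevs, an RDMA transport, subsystems, namespaces, listeners and allowed hosts). As a minimal sketch only, the same steps can be issued one at a time through scripts/spdkcli.py, assuming an nvmf_tgt application is already running and that spdkcli.py executes a command passed on its command line (as the `ll /nvmf` call further down in this log does); the paths and the 192.168.100.8 address are taken from this run and will differ on other setups.

  SPDKCLI=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py
  # create a malloc bdev to back the namespace (same parameters as in the job above)
  $SPDKCLI "/bdevs/malloc create 32 512 Malloc3"
  # create the RDMA transport, then a subsystem that initially allows any host
  $SPDKCLI "nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
  $SPDKCLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
  # attach the bdev as namespace 1 and expose the subsystem on 192.168.100.8:4260
  $SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
  $SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"

The "Executing command" lines that follow are spdkcli_job.py's confirmation of each of these operations.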
00:27:54.806 [2024-05-16 20:37:07.469757] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:54.806 [2024-05-16 20:37:07.470095] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:27:56.702 [2024-05-16 20:37:09.632888] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:27:58.598 [2024-05-16 20:37:11.491103] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:27:59.970 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:59.970 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:59.970 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:59.970 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:59.970 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:59.970 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:59.970 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:59.970 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:27:59.970 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:27:59.970 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:59.970 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:59.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:59.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:59.971 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:00.228 20:37:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:00.228 20:37:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.228 20:37:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:00.228 20:37:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:00.228 20:37:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:00.228 20:37:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:00.228 20:37:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:28:00.228 20:37:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:00.485 20:37:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:00.485 20:37:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:00.485 20:37:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:00.485 20:37:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.485 20:37:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:00.742 20:37:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:00.742 20:37:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:00.742 20:37:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:00.742 20:37:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:00.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:00.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:00.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:00.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:28:00.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:28:00.743 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:00.743 '\''/nvmf/subsystem delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:00.743 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:00.743 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:00.743 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:00.743 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:00.743 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:00.743 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:00.743 ' 00:28:06.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:06.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:06.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:06.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:06.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:28:06.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:28:06.058 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:06.058 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:06.058 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:06.058 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:06.058 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:06.058 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:06.058 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:06.058 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3216330 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@946 -- # '[' -z 3216330 ']' 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # kill -0 3216330 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # uname 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3216330 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3216330' 00:28:06.058 killing process with pid 3216330 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@965 -- # kill 3216330 00:28:06.058 [2024-05-16 20:37:18.511178] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
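[Editor's note] The sequence just logged is the verification and teardown half of the test: check_match dumps the live /nvmf tree and diffs it against a pattern file, spdkcli_job.py deletes the namespaces, hosts, listeners, subsystems and bdevs, and the target process (pid 3216330) is then killed; the wait, nvmftestfini and module unload that follow below complete the cleanup. A hedged bash sketch of that flow, using only paths and commands that appear in this log; the redirection of the `ll /nvmf` output into the .test file is how test/spdkcli/common.sh is understood to work and is not visible in the xtrace itself, and NVMF_TGT_PID is a placeholder variable:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # dump the current nvmf configuration and compare it against the expected-output pattern
  $SPDK/scripts/spdkcli.py ll /nvmf > $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test
  $SPDK/test/app/match/match $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test.match
  rm -f $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test
  # unwind the configuration, then remove the backing bdevs
  $SPDK/scripts/spdkcli.py "/nvmf/subsystem delete_all"
  $SPDK/scripts/spdkcli.py "/bdevs/malloc delete Malloc3"
  # stop the nvmf target (3216330 in this run) and unload the initiator-side RDMA modules
  kill "$NVMF_TGT_PID"
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics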
00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@970 -- # wait 3216330 00:28:06.058 [2024-05-16 20:37:18.563390] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:06.058 rmmod nvme_rdma 00:28:06.058 rmmod nvme_fabrics 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:06.058 00:28:06.058 real 0m22.088s 00:28:06.058 user 0m46.577s 00:28:06.058 sys 0m5.517s 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:06.058 20:37:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:06.058 ************************************ 00:28:06.058 END TEST spdkcli_nvmf_rdma 00:28:06.058 ************************************ 00:28:06.058 20:37:18 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:06.058 20:37:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:06.058 20:37:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:06.058 20:37:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:06.058 20:37:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:06.058 20:37:18 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:06.058 20:37:18 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:06.058 20:37:18 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:06.058 20:37:18 -- common/autotest_common.sh@10 -- # set +x 00:28:06.058 20:37:18 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:06.058 20:37:18 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:28:06.058 20:37:18 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:28:06.058 20:37:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.252 INFO: APP EXITING 00:28:10.252 INFO: killing all VMs 00:28:10.252 INFO: killing vhost app 00:28:10.252 
WARN: no vhost pid file found 00:28:10.252 INFO: EXIT DONE 00:28:12.782 Waiting for block devices as requested 00:28:12.782 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:28:12.782 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:12.782 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:12.782 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:13.040 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:13.040 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:13.040 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:13.040 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:13.298 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:13.298 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:13.298 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:13.298 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:13.556 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:13.556 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:13.556 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:13.813 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:13.813 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:17.100 Cleaning 00:28:17.100 Removing: /var/run/dpdk/spdk0/config 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:17.100 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:17.100 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:17.100 Removing: /var/run/dpdk/spdk1/config 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:17.100 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:17.100 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:17.100 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:17.100 Removing: /var/run/dpdk/spdk2/config 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:17.100 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:17.100 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:17.100 Removing: /var/run/dpdk/spdk3/config 00:28:17.100 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:17.100 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:17.100 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:17.100 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:17.100 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:17.100 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:17.100 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:17.100 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:17.100 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:17.100 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:17.100 Removing: /var/run/dpdk/spdk4/config 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:17.100 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:17.100 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:17.100 Removing: /dev/shm/bdevperf_trace.pid2984794 00:28:17.100 Removing: /dev/shm/bdevperf_trace.pid3130487 00:28:17.100 Removing: /dev/shm/bdev_svc_trace.1 00:28:17.100 Removing: /dev/shm/nvmf_trace.0 00:28:17.100 Removing: /dev/shm/spdk_tgt_trace.pid2870723 00:28:17.100 Removing: /var/run/dpdk/spdk0 00:28:17.100 Removing: /var/run/dpdk/spdk1 00:28:17.100 Removing: /var/run/dpdk/spdk2 00:28:17.100 Removing: /var/run/dpdk/spdk3 00:28:17.100 Removing: /var/run/dpdk/spdk4 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2868369 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2869438 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2870723 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2871352 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2872303 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2872544 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2873521 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2873627 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2873868 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2878897 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2880389 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2880671 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2880963 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2881259 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2881554 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2881808 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2882056 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2882338 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2883302 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2886791 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2887059 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2887326 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2887553 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2887832 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2888060 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2888548 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2888594 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2888958 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2889070 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2889322 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2889481 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2889893 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2890141 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2890428 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2890700 
00:28:17.100 Removing: /var/run/dpdk/spdk_pid2890756 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2891007 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2891254 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2891502 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2891753 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2891998 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2892258 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2892505 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2892752 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2893006 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2893253 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2893501 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2893757 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2894002 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2894249 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2894504 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2894754 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2895006 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2895258 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2895510 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2895761 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2896009 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2896267 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2896600 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2900780 00:28:17.100 Removing: /var/run/dpdk/spdk_pid2944680 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2948970 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2959453 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2964913 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2968734 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2969520 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2984794 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2985047 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2989263 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2995786 00:28:17.358 Removing: /var/run/dpdk/spdk_pid2998387 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3008891 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3034006 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3037818 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3085134 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3100494 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3128531 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3129491 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3130487 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3134865 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3142282 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3143203 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3144115 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3145037 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3145400 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3150006 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3150017 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3154878 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3155360 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3156218 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3156953 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3157121 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3161775 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3162234 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3166621 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3169373 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3175102 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3184939 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3184944 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3204963 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3205356 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3211493 
00:28:17.358 Removing: /var/run/dpdk/spdk_pid3211979 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3214082 00:28:17.358 Removing: /var/run/dpdk/spdk_pid3216330 00:28:17.358 Clean 00:28:17.358 20:37:30 -- common/autotest_common.sh@1447 -- # return 0 00:28:17.358 20:37:30 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:17.358 20:37:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.358 20:37:30 -- common/autotest_common.sh@10 -- # set +x 00:28:17.615 20:37:30 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:17.615 20:37:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.615 20:37:30 -- common/autotest_common.sh@10 -- # set +x 00:28:17.615 20:37:30 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:28:17.615 20:37:30 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:28:17.615 20:37:30 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:28:17.615 20:37:30 -- spdk/autotest.sh@391 -- # hash lcov 00:28:17.615 20:37:30 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:17.615 20:37:30 -- spdk/autotest.sh@393 -- # hostname 00:28:17.615 20:37:30 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-05 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:28:17.615 geninfo: WARNING: invalid characters removed from testname! 00:28:39.526 20:37:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:39.526 20:37:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:40.459 20:37:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:41.832 20:37:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:43.731 20:37:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:45.630 20:37:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:47.002 20:37:59 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:47.002 20:37:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:47.002 20:37:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:47.002 20:37:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.002 20:37:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.002 20:37:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.002 20:37:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.002 20:37:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.002 20:37:59 -- paths/export.sh@5 -- $ export PATH 00:28:47.002 20:37:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.002 20:37:59 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:28:47.002 20:37:59 -- common/autobuild_common.sh@437 -- $ date +%s 00:28:47.002 20:37:59 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715884679.XXXXXX 00:28:47.002 20:37:59 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715884679.v1AsJt 00:28:47.002 20:37:59 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:28:47.002 20:37:59 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:28:47.002 20:37:59 -- common/autobuild_common.sh@446 -- $ 
scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:28:47.002 20:37:59 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:47.002 20:37:59 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:47.003 20:37:59 -- common/autobuild_common.sh@453 -- $ get_config_params 00:28:47.003 20:37:59 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:28:47.003 20:37:59 -- common/autotest_common.sh@10 -- $ set +x 00:28:47.003 20:37:59 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:28:47.003 20:37:59 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:28:47.003 20:37:59 -- pm/common@17 -- $ local monitor 00:28:47.003 20:37:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:47.003 20:37:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:47.003 20:37:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:47.003 20:37:59 -- pm/common@21 -- $ date +%s 00:28:47.003 20:37:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:47.003 20:37:59 -- pm/common@21 -- $ date +%s 00:28:47.003 20:37:59 -- pm/common@25 -- $ sleep 1 00:28:47.003 20:37:59 -- pm/common@21 -- $ date +%s 00:28:47.003 20:37:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715884679 00:28:47.003 20:37:59 -- pm/common@21 -- $ date +%s 00:28:47.003 20:37:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715884679 00:28:47.003 20:37:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715884679 00:28:47.003 20:37:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715884679 00:28:47.003 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715884679_collect-cpu-temp.pm.log 00:28:47.003 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715884679_collect-cpu-load.pm.log 00:28:47.003 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715884679_collect-vmstat.pm.log 00:28:47.003 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715884679_collect-bmc-pm.bmc.pm.log 00:28:47.938 20:38:00 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:28:47.938 20:38:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:28:47.938 20:38:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:28:47.938 20:38:00 -- 
spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:47.938 20:38:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:47.938 20:38:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:47.938 20:38:00 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:47.938 20:38:00 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:47.938 20:38:00 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:28:48.197 20:38:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:48.197 20:38:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:48.197 20:38:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:48.197 20:38:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:48.197 20:38:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:48.197 20:38:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:48.197 20:38:00 -- pm/common@44 -- $ pid=3231588 00:28:48.197 20:38:00 -- pm/common@50 -- $ kill -TERM 3231588 00:28:48.197 20:38:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:48.197 20:38:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:48.197 20:38:00 -- pm/common@44 -- $ pid=3231590 00:28:48.197 20:38:00 -- pm/common@50 -- $ kill -TERM 3231590 00:28:48.197 20:38:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:48.197 20:38:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:48.197 20:38:00 -- pm/common@44 -- $ pid=3231592 00:28:48.197 20:38:00 -- pm/common@50 -- $ kill -TERM 3231592 00:28:48.197 20:38:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:48.197 20:38:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:48.197 20:38:00 -- pm/common@44 -- $ pid=3231623 00:28:48.197 20:38:00 -- pm/common@50 -- $ sudo -E kill -TERM 3231623 00:28:48.197 + [[ -n 2761760 ]] 00:28:48.197 + sudo kill 2761760 00:28:48.205 [Pipeline] } 00:28:48.219 [Pipeline] // stage 00:28:48.223 [Pipeline] } 00:28:48.241 [Pipeline] // timeout 00:28:48.245 [Pipeline] } 00:28:48.259 [Pipeline] // catchError 00:28:48.264 [Pipeline] } 00:28:48.278 [Pipeline] // wrap 00:28:48.282 [Pipeline] } 00:28:48.295 [Pipeline] // catchError 00:28:48.302 [Pipeline] stage 00:28:48.304 [Pipeline] { (Epilogue) 00:28:48.316 [Pipeline] catchError 00:28:48.317 [Pipeline] { 00:28:48.330 [Pipeline] echo 00:28:48.331 Cleanup processes 00:28:48.334 [Pipeline] sh 00:28:48.611 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:28:48.611 3231718 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:28:48.611 3231989 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:28:48.625 [Pipeline] sh 00:28:48.911 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:28:48.911 ++ grep -v 'sudo pgrep' 00:28:48.911 ++ awk '{print $1}' 00:28:48.911 + sudo kill -9 3231718 00:28:48.923 [Pipeline] sh 00:28:49.202 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:57.426 [Pipeline] sh 00:28:57.710 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:57.710 Artifacts sizes 
are good 00:28:57.724 [Pipeline] archiveArtifacts 00:28:57.731 Archiving artifacts 00:28:57.856 [Pipeline] sh 00:28:58.139 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:28:58.155 [Pipeline] cleanWs 00:28:58.165 [WS-CLEANUP] Deleting project workspace... 00:28:58.165 [WS-CLEANUP] Deferred wipeout is used... 00:28:58.171 [WS-CLEANUP] done 00:28:58.173 [Pipeline] } 00:28:58.192 [Pipeline] // catchError 00:28:58.203 [Pipeline] sh 00:28:58.479 + logger -p user.info -t JENKINS-CI 00:28:58.488 [Pipeline] } 00:28:58.502 [Pipeline] // stage 00:28:58.507 [Pipeline] } 00:28:58.523 [Pipeline] // node 00:28:58.529 [Pipeline] End of Pipeline 00:28:58.561 Finished: SUCCESS
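[Editor's note] For reference, the coverage post-processing recorded further up (the lcov calls between the cleanup output and the autopackage step) reduces to: merge the base and test capture files, then strip DPDK, system headers and a few example apps out of the combined tracefile. A condensed sketch, where LCOV_OPTS stands in for the full set of --rc/--no-external/-q flags that appear verbatim in the log:

  OUT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  # combine the pre-test and post-test captures into one tracefile
  lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
  # drop coverage for code that is not SPDK's own
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info
  done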